Variational Procedure of Deriving Diffusion Equation for Spreading in Porous Media
1. Introduction
The problem of deriving the governing equations for matter spreading in porous media, under different propositions, was considered in [1-3]. Thus in [1], we made thorough use of the fact that centered Gaussian processes are completely determined by the (two-point) correlation function. A multipoint correlation appears when a linear fixed-point equation is averaged. It was quite important for us that each of these correlations splits into a finite (but increasing with the number of points) number of products of correlation functions. It is possible to use the diagrammatic method with more general processes [3], but then the expansions contain many more terms. Even if we replace the coefficients of the fixed-point equation for u by functions of a centered Gaussian process, the method is not simple to use. It is possible to obtain similar results via a variational method, which we explain here for the example of a problem already considered in [1,2]. Not surprisingly, we then retrieve the macroscopic equation already obtained. We then use the variational method for a variant of the equation that includes functions of a centered Gaussian process.
2. Mathematical Model
2.1. Statement of the Method
The idea goes back to so-called variational (functional) derivatives [4].
is named a variational (or functional) derivative. It is obvious that this derivative is also a functional that depends on function
This is again a functional with respect to
In the case when functional
And with
Also notice that the functional derivative of functional
For following purposes we take the functional
Functional Taylor’s series for functional
2.2. Application of Variational Method
In [1-3] we used a special summation procedure over diagrams of a certain type and obtained a diffusion equation with fractional derivatives. In this section, we exploit another method, also used in [5], to obtain self-contained systems of governing equations for diffusion problems. We again consider the spreading of matter in a porous medium such that (1) rules the particle transfer on the small scale, and we repeat this equation here once more:
Averaging with respect to realizations
This equation contains the unknown
By substituting all above into (1), we derive:
We see here that together with averaged concentration
This equation contains the unknown function
We should have written here “and so on”, since after that we should take the functional derivative over
The structure of the thus obtained equations is such that the k-th equation in the hierarchy, with unknown
Here we consider random media, where cumulant functions entering Equation (4) are of the even order
We mentioned that if the porosity is a normal random field, then (5) is exact because all cumulants except one are zero. As we consider small porosity fluctuations, in the right-hand side of (5) the third and the fourth addends, containing
This leads us immediately to the following initial condition for
Indeed, we have
In order to give the integrals on the right hand-side of (6) a concrete meaning, we now specialize the Gaussian process
3. Basic Equation Evolution
Let us assume that the random field
Our ultimate aim is to derive an equation related only to the mean concentration
However, such a procedure is rather intricate in
In the last relations the argument
Also note, that if we choose
From this we compute
with A and B being defined by
Thus, we have explicit solutions to (6) and (7) in
Hence the successive approximation method and the functional derivative method lead to similar results for problems where both methods are applicable.
4. Conclusion
So, in this paper, we presented an example of how to use the notion of a functional derivative to arrive at a macroscopic equation for dispersion in disordered media. In fact, we also used the present method for an equation that had been derived for a different type of disordered medium, made of inter-twisted tubes, for which a one-dimensional approach has physical meaning. Hence, the example can only serve formally. Indeed, the sample paths of Gaussian processes can take negative values, which is problematic when the existence of solutions is needed. This drawback can be removed by replacing ε with the exponential of a Gaussian process. We do not pursue this here, but the presented method works fairly well in that case and gives the results already obtained via Feynman diagrams in our work [1].
|
{"url":"https://www.scirp.org/journal/paperinformation?paperid=37484","timestamp":"2024-11-08T17:49:31Z","content_type":"application/xhtml+xml","content_length":"104543","record_id":"<urn:uuid:5c793811-4b7a-407c-8e5c-7fafb7e5c942>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00116.warc.gz"}
|
How To Increase Solving Speed In Quantitative Aptitude?
June 10, 2022
How To Increase Solving Speed In Quantitative Aptitude
Quantitative Aptitude is one of the most important aspects of placement examinations. It lets candidates demonstrate their thinking, problem-solving and quick decision-making abilities to recruiters.
But many candidates struggle to complete the Quantitative Aptitude section in time due to the length and complexity of the calculations. This has led many candidates to search for methods to improve their speed in the Quantitative Aptitude section.
That’s why we have written this article to help students identify and understand the approaches and techniques on how to increase solving speed in Quantitative Aptitude.
How To Increase Solving Speed In Quantitative Aptitude Section
These are the methods you can use to increase solving speed in Quantitative Aptitude:
Step 1 – Understand the Complete Syllabus
The initial stage of any significant endeavour is the most crucial, because it determines the overall trajectory of the journey. Preparing for the Quantitative Aptitude examination is no different: without question, it is a challenging task full of peaks and troughs and significant difficulties.
The Quantitative Aptitude syllabus is comprehensive, diversified, and, in some instances, open-ended. If you understand the syllabus completely, you can devise a reasonable plan for your preparation that requires fewer hours, covers more topics, and leaves more time for the practice that improves your solving speed.
Step 2 – Improve Your Weaker Areas
It is natural for students to have topics in which they are weaker in Quantitative Aptitude. After all, various problem-solving approaches or abilities might well be required for different topics.
The dilemma then becomes whether students should concentrate on refining a topic or on improving weaker concepts.
In our opinion, students should concentrate on their weakest topics. This is because having a topic in which the student feels inadequate induces unnecessary tension during the examination. As a result, they may perform below their best, or in other words, it can hamper their ability to solve questions at a good calculation speed.
It is advised that a student start with essential questions in their weaker topics. The objective is to guarantee that students have a clear understanding of the basics and that they acquire confidence as they answer each subsequent question while improving their calculation speed.
Step 3 – Learn Speed Maths & Vedic Mathematics Tricks
The term ‘Vedic’ is derived from the Sanskrit word ‘Veda,’ which denotes ‘Knowledge.’ Vedic Maths is a remarkable compilation of sutras for solving maths problems quickly and efficiently. If you go
through any placement exam paper, you will see that there are many questions that can be answered efficiently and quickly by utilising these Vedic Maths Tricks.
Vedic Maths is no piece of cake: it requires extensive practice. These approaches may appear complex or challenging at first, but with practice your calculations will run quickly and smoothly. Using Vedic Maths, problems are simplified to one-line answers. It can be learned and developed quickly and effectively, and compared to traditional methods the calculations are quicker and more reliable.
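As a concrete illustration (this particular trick, squaring numbers that end in 5 via the 'Ekadhikena Purvena' sutra, is a standard Vedic Maths example and is not drawn from this article):

```python
def square_ending_in_5(n: int) -> int:
    """Square a positive integer ending in 5 using the Vedic
    'Ekadhikena Purvena' shortcut: for n = 10*a + 5,
    n**2 = a*(a+1) followed by 25."""
    assert n % 10 == 5
    a = n // 10
    return a * (a + 1) * 100 + 25

# 35**2: 3*4 = 12, append 25 -> 1225
print(square_ending_in_5(35))   # -> 1225
# 115**2: 11*12 = 132, append 25 -> 13225
print(square_ending_in_5(115))  # -> 13225
```

Once memorised, this turns a multi-step multiplication into a single mental step, which is exactly the kind of speed-up the Vedic sutras aim for.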
Step 4 – Be Thorough with Tables, Squares & Cubes
Tables, squares, cubes, and so on are typically used to speed up calculations. Memorising and practising will help you remember them. You must sometimes uncover relationships between numbers and, on
other occasions, just memorise.
Tables, squares, and cubes can be thought of as tools, similar to a screwdriver used to twist a screw. Understanding the principles and practising them leads to mastery of tables, squares, and cubes.
Experts recommend memorising tables up to 20, squares up to 25, and cubes up to 10 to improve speed in your calculations.
Step 5 – Observe & Understand the Question
Quantitative Aptitude is a challenging concept to master. The most crucial facet that might make it more difficult for students is that they do not properly understand the question before answering
it. To not fall into the trap of solving without understanding the question, follow the process mentioned underneath.
Pay close attention to what the question is asking. Understand it and attempt to picture it in your mind. After that, figure out which concepts you can apply. Is it necessary to use addition or the
Remainder theorem, for example? You can go to the following step if you understand the problem and know which concept to use.
Attempting to tackle it in a single step might be a daunting task. Instead, seek an approach that breaks down the problem into smaller chunks and arrives at a solution.
Step 6 – Incorporating the Right Shortcuts/Methods
Tricks and shortcut methods can help you solve the Quantitative Aptitude section quickly.
These methods give students the confidence that they are approaching the solution to a problem faster. Today’s students have a multitude of shortcuts and techniques at their disposal, and
understanding which to employ is half the battle. Students might utilise several shortcut methods to answer the problem depending on circumstances. Students acquire problem-solving abilities and grow
more comfortable exploring for new answers when they understand how to get what they want.
Step 7 – Practice
When you see or read anything only once, you don’t learn it, at least not quite enough to retain it permanently. It may engage you for a few more encounters, but you easily forget about it and move
on to something else.
While the ageing process has an impact on our recollection, there is still a lot we can really do to help us remember more when we want to study. For generations, repetition has been utilised as a
memorisation technique. The right kind of repetition could assist your memory significantly. To study Quantitative Aptitude for placement exams, you must practice all you have learnt.
If knowledge is repeated or revisited frequently, at gradually increasing intervals, it gets transferred to long-term memory and becomes more entrenched over time. Every time you learn something new and practice it, you strengthen that specific learning or behaviour in memory and make it simpler to remember or recall.
Final Words
We hope this article helps students identify and understand the approaches and techniques on how to increase solving speed in Quantitative Aptitude. If you have any queries regarding the techniques,
feel free to drop a comment in the comments section. We wish you all the best in your preparations.
Frequently Asked Questions
These are the frequently asked questions regarding how to increase solving speed in Quantitative Aptitude:
1. Why does solving speed matter in acing the quantitative aptitude section?
Solving speed matters in acing the Quantitative Aptitude section because most of the Quantitative Aptitude section is set in a time-constrained environment, making it absolutely necessary to solve
all questions in time.
2. How can I get started in improving the solving speed?
You can start by understanding the whole syllabus, then identifying your strong and weak points, and then improving them by learning different shortcut methods and tables to improve your calculation speed.
3. What are the factors that affect the solving speed?
The factors that affect the solving speed include the ability to break down a problem into smaller parts, approaching each part as a separate entity and identifying which shortcut method or technique
can help get the desired results.
4. How does concept clarification go hand in hand with solving speed?
Concept clarification goes hand in hand with solving speed because understanding a concept thoroughly can help students approach the problem without any doubt, improving the solving speed.
Explore Quantitative Aptitude Guides
Explore More Quantitative Aptitude Resources
|
{"url":"https://www.placementpreparation.io/blog/how-to-increase-solving-speed-in-quantitative-aptitude/","timestamp":"2024-11-02T23:28:14Z","content_type":"text/html","content_length":"152734","record_id":"<urn:uuid:1312a883-3bf4-46d7-b0c6-0af5e7840936>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00249.warc.gz"}
|
A147640 - OEIS
The standard way to search for records in the ABC conjecture is to run with the C parameter through all the integers
. If this search space is diluted by admitting only powers of 2 as in
, the sequence of records changes. This sequence here lists the A such that the triples (A=a(n), B=
(n), C=
(n)) locate records for this search when C is restricted to powers of 2.
|
{"url":"https://oeis.org/A147640","timestamp":"2024-11-02T05:20:43Z","content_type":"text/html","content_length":"14370","record_id":"<urn:uuid:55f695d2-e320-4267-b557-b8dbef65a30a>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00574.warc.gz"}
|
Multiply And Divide Complex Numbers Worksheet
Multiply And Divide Complex Numbers Worksheets serve as fundamental tools in mathematics, providing an organized yet flexible platform for students to explore and master mathematical ideas. These worksheets offer a structured approach to understanding numbers, nurturing a strong foundation on which mathematical proficiency can flourish. From the most basic counting exercises to the intricacies of sophisticated computations, they cater to learners of varied ages and skill levels.
Introducing the Essence of Multiply And Divide Complex Numbers Worksheet
At their core, Multiply And Divide Complex Numbers Worksheets are vehicles for conceptual understanding. They cover a range of mathematical principles, leading learners through a series of engaging, deliberate exercises. These worksheets go beyond rote learning, encouraging active engagement and cultivating an intuitive grasp of mathematical relationships.
Supporting Number Sense and Reasoning
Complex Numbers Worksheet Answer Key
The heart of Multiply And Divide Complex Numbers Worksheets lies in cultivating number sense: a deep comprehension of what numbers mean and how they relate. They encourage exploration, inviting students to dissect arithmetic procedures, spot patterns, and unravel sequences. Through thought-provoking challenges and practical problems, these worksheets become gateways to sharper reasoning skills, supporting the analytical minds of budding mathematicians.
From Theory to Real-World Application
50 Multiplying Complex Numbers Worksheet
Improve your math knowledge with free questions in "Add, subtract, multiply and divide complex numbers" and thousands of other math skills. Example: divide the complex numbers (6 + 8i) / (4 + 3i). Khan Academy offers free material on math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history and more.
Multiply And Divide Complex Numbers Worksheets serve as avenues connecting theoretical abstractions with the realities of everyday life. By building practical situations into mathematical exercises, learners see the relevance of numbers in their surroundings. From budgeting and measurement conversions to reading statistical data, these worksheets let students apply their mathematical knowledge beyond the confines of the classroom.
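The arithmetic these worksheets drill is easy to check mechanically. The sketch below (a hedged illustration using Python's built-in complex type, with sample values typical of such worksheets) multiplies and divides two complex numbers by hand, the same way a student would on paper:

```python
def mul(a, b):
    """(a0 + a1*i)(b0 + b1*i) = (a0*b0 - a1*b1) + (a0*b1 + a1*b0)*i"""
    return (a.real*b.real - a.imag*b.imag) + (a.real*b.imag + a.imag*b.real)*1j

def div(a, b):
    """Multiply numerator and denominator by the conjugate of b,
    then divide by |b|^2, which is real."""
    d = b.real**2 + b.imag**2
    return mul(a, b.conjugate()) / d

print(mul(5 + 4j, 7 - 3j))   # -> (47+13j)
print(div(6 + 8j, 4 + 3j))   # -> (1.92+0.56j)
```

The results can be compared against Python's built-in `*` and `/` operators on complex numbers, which is a handy way to self-check worksheet answers.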
Diverse Tools and Techniques
Adaptability is inherent in Multiply And Divide Complex Numbers Worksheets, which use a collection of pedagogical tools to cater to diverse learning styles. Visual aids such as number lines, manipulatives, and digital resources serve as companions in visualizing abstract concepts. This varied approach ensures inclusivity, accommodating students with different preferences, strengths, and cognitive styles.
Inclusivity and Cultural Relevance
In an increasingly diverse world, Multiply And Divide Complex Numbers Worksheets embrace inclusivity. They cross cultural boundaries, using examples and problems that resonate with students from diverse backgrounds. By incorporating culturally relevant contexts, these worksheets foster an environment where every student feels represented and valued, strengthening their connection with mathematical concepts.
Crafting a Path to Mathematical Mastery
Multiply And Divide Complex Numbers Worksheets chart a course toward mathematical fluency. They build perseverance, critical thinking, and problem-solving skills, qualities essential not just in mathematics but in many facets of life. These worksheets equip students to navigate the intricate terrain of numbers, nurturing a deep appreciation for the elegance and logic inherent in mathematics.
Welcoming the Future of Education
In an age marked by technological advancement, Multiply And Divide Complex Numbers Worksheets adapt seamlessly to digital platforms. Interactive interfaces and electronic resources augment traditional learning, offering immersive experiences that transcend spatial and temporal limits. This combination of traditional approaches with new technology promises a more dynamic and engaging learning environment.
Final thought: Embracing the Magic of Numbers
Multiply And Divide Complex Numbers Worksheets exemplify the magic inherent in mathematics: a journey of exploration, discovery, and mastery. They go beyond standard pedagogy, acting as catalysts for curiosity and inquiry. Through these worksheets, students unlock the world of numbers, one problem and one solution at a time.
Multiplying And Dividing Complex Numbers Effortless Math
Multiplying And Dividing Complex Numbers Worksheets
Guides students in solving equations that involve multiplying and dividing complex numbers; demonstrates answer checking. Example: multiply (5 + 4i)(7 − 3i). View worksheet.
|
{"url":"https://szukarka.net/multiply-and-divide-complex-numbers-worksheet","timestamp":"2024-11-09T01:13:40Z","content_type":"text/html","content_length":"25221","record_id":"<urn:uuid:b38e44ce-2657-41a9-b97f-f32e5fecd712>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00597.warc.gz"}
|
1) An island has a population of 20,000 birds
Subject:MathPrice:3.86 Bought12
1) An island has a population of 20,000 birds. A disease starts when 10 birds have gotten infected, and 4 days later, 200 birds are infected. The rate at which the disease spreads is proportional to the number of birds infected times the quotient of the number of birds not yet infected and the population of the island.
(I know the quotient of a and b is a/b, but I am struggling with this part a bit.)
a) Find an equation for the number of birds infected at time t (t in days).
b) how many birds are infected two weeks into the disease
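This is a logistic-growth setup: with N = 20,000 and dC/dt = k·C·(N−C)/N, the solution is C(t) = N / (1 + A·e^(−kt)) with A = (N−C0)/C0. A hedged numerical sketch of one reading of the problem (this is not the site's paid answer; the growth constant k is fitted from the given data point C(4) = 200):

```python
import math

N, C0 = 20_000, 10          # island population, initially infected birds
A = (N - C0) / C0           # = 1999

# Fit k from the data point C(4) = 200:
# 200 = N / (1 + A*exp(-4k))  =>  exp(-4k) = (N/200 - 1)/A
k = -math.log((N / 200 - 1) / A) / 4

def infected(t):
    """Logistic solution C(t) = N / (1 + A*exp(-k*t))."""
    return N / (1 + A * math.exp(-k * t))

print(round(k, 4))          # growth constant, ~0.75 per day
print(round(infected(14)))  # birds infected two weeks in, ~19,000
```

By day 14 the infection is already close to saturating the island's population, which is characteristic of logistic growth once past the inflection point.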
Option 1
Low Cost Option
Download this past answer in few clicks
3.86 USD
Option 2
Custom new solution created by our subject matter experts
|
{"url":"https://studyhelpme.com/question/43007/1-a-island-has-a-population-of-20000-birds-A-disease-starts-where-10-birds-have-gotten-infected","timestamp":"2024-11-06T06:01:45Z","content_type":"text/html","content_length":"68998","record_id":"<urn:uuid:63623b14-96d4-46e6-ba15-ddf02bbaf99e>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00644.warc.gz"}
|
Geophysical data processing
The aim of this course is the numerical mathematics methods used in applied geophysics. Exercises are based on use of MATLAB program.
Numerical methods for solving systems of linear equations, including Gaussian elimination and generalized inversion.
The basic methods of approximation and interpolation of numerical data series.
The basics of numerical differentiation and integration.
The Fourier transform and its discrete implementation.
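The course exercises use MATLAB; as a rough equivalent sketch in Python/NumPy (the matrix and data below are illustrative, not from the course), the generalized (Moore-Penrose) inverse solves an overdetermined linear system in the least-squares sense:

```python
import numpy as np

# Overdetermined system A x = b (4 equations, 2 unknowns):
# fitting a line y = c0 + c1*x through four data points.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([6.0, 5.0, 7.0, 10.0])

# Least-squares solution via the Moore-Penrose pseudoinverse
x_pinv = np.linalg.pinv(A) @ b

# Same answer from the dedicated least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_pinv, x_lstsq))  # -> True
```

In MATLAB the same computation would be `pinv(A)*b` or simply `A\b`; the pseudoinverse route generalizes to rank-deficient systems as well.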
The numerical methods taught are so general that they can be used in virtually all disciplines.
For students interested in the follow-up master's study of Applied Geology, focusing on applied geophysics, the completion of this course is practically necessary to understand a number of data
processing methods in applied geophysics.
The course is strongly practically oriented; MATLAB is used to perform the calculations. Prior knowledge of MATLAB is an advantage, but it is generally assumed that students without previous knowledge of MATLAB will also apply.
|
{"url":"https://explorer.cuni.cz/class/MG452P21?query=Peat","timestamp":"2024-11-02T12:28:14Z","content_type":"text/html","content_length":"33701","record_id":"<urn:uuid:6e8e6a3a-d1b0-4187-89f2-12bfa8068120>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00755.warc.gz"}
|
What is 3phase half wave rectifier?
3 Phase Half Wave Rectifier: In a three-phase half-wave rectifier, three diodes are connected, one to each of the three phases of the secondary winding of the transformer. The three phases of the secondary are connected in the form of a star, so it is also called a star-connected secondary.
What are the difference of 3 phase bridge and half wave rectifiers?
A 3-phase full-wave rectifier is obtained by combining two half-wave rectifier circuits. The advantage is that the circuit produces a lower-ripple output than the half-wave 3-phase rectifier, since the ripple frequency is six times that of the input AC waveform.
What is the formula for output voltage for 3 phase full wave bridge rectifier?
The DC output voltage of a 3-phase bridge rectifier is 1.654 Vm (where Vm is the peak phase voltage) or, equivalently, 1.3505 V_LL (where V_LL is the rms line-to-line voltage).
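The two coefficients describe the same output voltage, referenced to different input voltages; a quick check of the standard relations (textbook formulas, assumed here, with Vm the peak phase voltage and V_LL the rms line-to-line voltage):

```python
import math

# 3-phase full-wave (6-pulse) bridge: Vdc = (3*sqrt(3)/pi) * Vm
# where Vm is the peak phase (line-to-neutral) voltage.
coeff_vm = 3 * math.sqrt(3) / math.pi          # ~1.654

# rms line-to-line voltage in terms of peak phase voltage:
# V_LL = sqrt(3) * (Vm / sqrt(2))  =>  Vm = sqrt(2/3) * V_LL
coeff_vll = coeff_vm * math.sqrt(2.0 / 3.0)    # ~1.3505

print(round(coeff_vm, 4), round(coeff_vll, 4))  # -> 1.654 1.3505
```

Note that coeff_vll simplifies algebraically to 3·√2/π, which is the form often quoted in textbooks.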
What is 3 phase full wave controlled rectifier?
A three-phase full converter is a fully controlled bridge rectifier using six thyristors connected in a full-wave bridge configuration. All six thyristors are controlled switches that are turned on at appropriate times by applying suitable gate trigger signals.
What is 2 pulse rectifier?
A single-phase, full-wave rectifier (regardless of design, center-tap or bridge) would be called a 2-pulse rectifier because it outputs two pulses of DC during one AC cycle’s worth of time. A
three-phase full-wave rectifier would be called a 6-pulse unit.
What is full wave rectifier?
A full wave rectifier is defined as a type of rectifier that converts both halves of each cycle of an alternating wave (AC signal) into a pulsating DC signal. Full-wave rectifiers are used to convert
AC voltage to DC voltage, requiring multiple diodes to construct.
What happens if rectifier goes bad?
In general, there are two primary ways that the regulator rectifier can fail. First, the diode can burn out and cause the battery to drain: if the output drops below around 13 volts, the bike will start to drain the battery, and when this happens it is only a matter of time before the engine stops completely.
How is a half wave 3 phase rectifier constructed?
A half-wave 3-phase rectifier is constructed using three individual diodes and a 120VAC 3-phase star connected transformer. If it is required to power a connected load with an impedance of 50Ω,
Calculate, a) the average DC voltage output to the load. b) the load current, c) the average current per diode. Assume ideal diodes. a).
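A sketch of the calculation, assuming (as is conventional) that 120 VAC denotes the rms phase voltage, so Vm = √2 × 120 V, and using the standard half-wave relation Vdc = (3√3 / 2π)·Vm:

```python
import math

V_rms_phase = 120.0                      # rms phase voltage (assumed)
Vm = math.sqrt(2) * V_rms_phase          # peak phase voltage, ~169.7 V
R_load = 50.0                            # load impedance in ohms

# a) average DC output of a 3-phase half-wave rectifier
Vdc = (3 * math.sqrt(3) / (2 * math.pi)) * Vm   # ~0.827 * Vm

# b) average load current
Idc = Vdc / R_load

# c) each diode conducts for one third of the cycle
I_diode = Idc / 3

print(round(Vdc, 1), round(Idc, 2), round(I_diode, 2))  # -> 140.3 2.81 0.94
```

So the load sees about 140 V DC and draws roughly 2.8 A, with each diode carrying about a third of that on average.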
What is the ripple factor for 3 phase half wave?
The ripple factor for the 3-phase half-wave rectifier is derived in the equations below. It is evident from those calculations that the ripple factor for the 3-phase half-wave rectifier is 0.17, i.e. 17%.
How many diodes are used in a full wave rectifier?
In a three-phase full-wave rectifier, six diodes are used; it is also called a 6-pulse rectifier. Each diode conducts for 1/6 of the AC cycle. The output DC voltage fluctuations are smaller in 3-phase full-wave rectifiers.
What is the peak inverse voltage of half wave rectifier?
Peak Inverse Voltage of Half Wave Rectifier
Peak Inverse Voltage (PIV) is the maximum voltage that the diode can withstand during the reverse-bias condition. If a voltage greater than the PIV is applied, the diode will be destroyed.
Form Factor of Half Wave Rectifier
|
{"url":"https://pegaswitch.com/lifehacks/what-is-3phase-half-wave-rectifier/","timestamp":"2024-11-06T10:34:24Z","content_type":"text/html","content_length":"128074","record_id":"<urn:uuid:1e5df28b-3274-4237-8afc-ec79a752f2f0>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00149.warc.gz"}
|
Intuitive Understanding of Sine Waves
Sine waves confused me. Yes, I can mumble "SOH CAH TOA" and draw lines within triangles. But what does it mean?
I was stuck thinking sine had to be extracted from other shapes. A quick analogy:
You: Geometry is about shapes, lines, and so on.
Alien: Oh? Can you show me a line?
You (looking around): Uh... see that brick, there? A line is one edge of that brick.
Alien: So lines are part of a shape?
You: Sort of. Yes, most shapes have lines in them. But a line is a basic concept on its own: a beam of light, a route on a map, or even--
Alien: Bricks have lines. Lines come from bricks. Bricks bricks bricks.
Most math classes are exactly this. "Circles have sine. Sine comes from circles. Circles circles circles."
Argh! No - circles are one example of sine. In a sentence: Sine is a natural sway, the epitome of smoothness: it makes circles "circular" in the same way lines make squares "square".
Let's build our intuition by seeing sine as its own shape, and then understand how it fits into circles and the like. Onward!
Sine vs Lines
Remember to separate an idea from an example: squares are examples of lines. Sine clicked when it became its own idea, not "part of a circle."
Let's observe sine in a simulator:
Hubert will give the tour:
• Click start. Go, Hubert go! Notice that smooth back and forth motion? That's Hubert, but more importantly (sorry Hubert), that's sine! It's natural, the way springs bounce, pendulums swing,
strings vibrate... and many things move.
• Change "vertical" to "linear". Big difference -- see how the motion gets constant and robotic, like a game of pong?
Let's explore the differences with video:
• Linear motion is constant: we go a set speed and turn around instantly. It's the unnatural motion in the robot dance (notice the linear bounce with no slowdown vs. the strobing effect).
• Sine changes its speed: it starts fast, slows down, stops, and speeds up again. It's the enchanting smoothness in liquid dancing (human sine wave and natural bounce).
Unfortunately, textbooks don't show sine with animations or dancing. No, they prefer to introduce sine with a timeline (try setting "horizontal" to "timeline"):
Egads. This is the schematic diagram we've always been shown. Does it give you the feeling of sine? Not any more than a skeleton portrays the agility of a cat. Let's watch sine move and then chart
its course.
The Unavoidable Circle
Circles have sine. Yes. But seeing the sine inside a circle is like getting the eggs back out of the omelette. It's all mixed together!
Let's take it slow. In the simulation, set Hubert to vertical:none and horizontal: sine*. See him wiggle sideways? That's the motion of sine. There's a small tweak: normally sine starts the cycle at
the neutral midpoint and races to the max. This time, we start at the max and fall towards the midpoint. Sine that "starts at the max" is called cosine, and it's just a version of sine (like a
horizontal line is a version of a vertical line).
Ok. Time for both sine waves: put vertical as "sine" and horizontal as "sine*". And... we have a circle!
A horizontal and vertical "spring" combine to give circular motion. Most textbooks draw the circle and try to extract the sine, but I prefer to build up: start with pure horizontal or vertical motion
and add in the other.
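The build-up described above is easy to reproduce numerically: sampling cosine for the horizontal spring and sine for the vertical one, every combined point lands on a circle (a small sketch, independent of the simulator):

```python
import math

# Horizontal motion: cosine (sine "starting at the max").
# Vertical motion:   sine (starting at neutral, racing to the max).
points = [(math.cos(t), math.sin(t))
          for t in [i * 2 * math.pi / 8 for i in range(8)]]

# Every combined point sits at distance 1 from the center: a circle.
for x, y in points:
    assert abs(math.hypot(x, y) - 1.0) < 1e-12

print("all 8 sample points lie on the unit circle")
```

The check is just the identity cos²(t) + sin²(t) = 1: two 1-d springs, offset by a quarter cycle, combine into circular motion.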
Quick Q & A
A few insights I missed when first learning sine:
Sine really is 1-dimensional
Sine wiggles in one dimension. Really. We often graph sine over time (so we don't write over ourselves) and sometimes the "thing" doing sine is also moving, but this is optional! A spring in one
dimension is a perfectly happy sine wave.
(Source: Wikipedia, try not to get hypnotized.)
Circles are an example of two sine waves
Circles and squares are a combination of basic components (sines and lines). The circle is made from two connected 1-d waves, each moving the horizontal and vertical direction.
(Source http://1ucasvb.tumblr.com/)
But remember, circles aren't the origin of sines any more than squares are the origin of lines. They're examples of two sine waves working together, not their source.
What do the values of sine mean?
Sine cycles between -1 and 1. It starts at 0, grows to 1.0 (max), dives to -1.0 (min) and returns to neutral. I also see sine like a percentage, from 100% (full steam ahead) to -100% (full retreat).
What is the input 'x' in sin(x)?
Tricky question. Sine is a cycle and x, the input, is how far along we are in the cycle.
Let's look at lines:
• You're traveling on a square. Each side takes 10 seconds.
• After 1 second, you are 10% complete on that side
• After 5 seconds, you are 50% complete
• After 10 seconds, you finished the side
Linear motion has few surprises. Now for sine (focusing on the "0 to max" cycle):
• We're traveling on a sine wave, from 0 (neutral) to 1.0 (max). This portion takes 10 seconds.
• After 5 seconds we are... 70% complete! Sine rockets out of the gate and slows down. Most of the gains are in the first 5 seconds
• It takes 5 more seconds to get from 70% to 100%. And going from 98% to 100% takes almost a full second!
Despite our initial speed, sine slows so we gently kiss the max value before turning around. This smoothness makes sine, sine.
For the geeks: Press "show stats" in the simulation. You'll see the percent complete of the total cycle, mini-cycle (0 to 1.0), and the value attained so far. Stop, step through, and switch between
linear and sine motion to see the values.
Quick quiz: What's further along, 10% of a linear cycle, or 10% of a sine cycle? Sine. Remember, it barrels out of the gate at max speed. By the time sine hits 50% of the cycle, it's moving at the average speed of the linear cycle, and beyond that, it goes slower (until it reaches the max and turns around).
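For the geeks (again): here's a quick Python scratchpad checking those percentages, assuming the "cycle" means the 0-to-max quarter cycle from the 10-second example:

```python
import math

def linear_progress(frac):
    # fraction of the 0-to-max height reached after `frac` of the time
    return frac

def sine_progress(frac):
    # sine covers 0 to max over a quarter cycle (0 to pi/2)
    return math.sin(frac * math.pi / 2)

print(round(linear_progress(0.10), 3))  # 0.1
print(round(sine_progress(0.10), 3))    # 0.156 — sine is already ahead
print(round(sine_progress(0.50), 3))    # 0.707 — about 70% done at half time
```

Sine covers about 71% of its height in the first half of the time, while linear motion manages only 50%.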
So x is the 'amount of your cycle'. What's the cycle?
It depends on the context.
• Basic trig: 'x' is degrees, and a full cycle is 360 degrees
• Advanced trig: 'x' is radians (they are more natural!), and a full cycle is going around the unit circle (2*pi radians)
Play with values of x here:
But again, cycles depend on circles! Can we escape their tyranny?
Pi without Pictures
Imagine a sightless alien who only notices shades of light and dark. Could you describe pi to it? It's hard to flicker the idea of a circle's circumference, right?
Let's step back a bit. Sine is a repeating pattern, which means it must... repeat! It goes from 0, to 1, to 0, to -1, to 0, and so on.
Let's define pi as the time sine takes from 0 to 1 and back to 0. Whoa! Now we're using pi without a circle too! Pi is a concept that just happens to show up in circles:
• Sine is a gentle back and forth rocking
• Pi is the time from neutral to max and back to neutral
• n * Pi (0 * Pi, 1 * pi, 2 * pi, and so on) are the times you are at neutral
• 2 * Pi, 4 * pi, 6 * pi, etc. are full cycles
Aha! That is why pi appears in so many formulas! Pi doesn't "belong" to circles any more than 0 and 1 do -- pi is about sine returning to center! A circle is an example of a shape that repeats and
returns to center every 2*pi units. But springs, vibrations, etc. return to center after pi too!
Question: If pi is half of a natural cycle, why isn't it a clean, simple number?
Let's answer a question with a question. Why does a 1x1 square have a diagonal of length $\sqrt{2} = 1.414...$ (an irrational number)?
It's philosophically inconvenient when nature doesn't line up with our number system. I don't have a good intuition. My hunch is simple rules (1x1 square + Pythagorean Theorem) can still lead to
complex outcomes.
How fast is sine?
I've been tricky. Previously, I said "imagine it takes sine 10 seconds from 0 to max". And now it's pi seconds from 0 to max back to 0? What gives?
• sin(x) is the default, off-the-shelf sine wave, that indeed takes pi units of time from 0 to max to 0 (or 2*pi for a complete cycle)
• sin(2x) is a wave that moves twice as fast
• sin(0.5x) is a wave that moves half as fast
So, we use sin(n*x) to get a sine wave cycling as fast as we need. Often, the phrase "sine wave" is referencing the general shape and not a specific speed.
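A quick numeric check of the speed-up (Python as scratch paper):

```python
import math

t = math.pi / 4  # a quarter of the way through sin(x)'s climb to the max
print(round(math.sin(t), 3))        # 0.707 — sin(x), partway up
print(round(math.sin(2 * t), 3))    # 1.0 — sin(2x) is already at the max
print(round(math.sin(0.5 * t), 3))  # 0.383 — sin(0.5x) lags behind
```

At the same instant, sin(2x) has finished its climb while sin(0.5x) is still warming up.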
Part 2: Understanding the definitions of sine
That's a brainful -- take a break if you need it. Hopefully, sine is emerging as its own pattern. Now let's develop our intuition by seeing how common definitions of sine connect.
Definition 1: The height of a triangle / circle!
Sine was first found in triangles. You may remember "SOH CAH TOA" as a mnemonic
• SOH: Sine is Opposite / Hypotenuse
• CAH: Cosine is Adjacent / Hypotenuse
• TOA: Tangent is Opposite / Adjacent
For a right triangle with angle x, sin(x) is the length of the opposite side divided by the hypotenuse. If we make the hypotenuse 1, we can simplify to:
• Sine = Opposite
• Cosine = Adjacent
And with more cleverness, we can draw our triangles with hypotenuse 1 in a circle with radius 1:
Voila! A circle containing all possible right triangles (since they can be scaled up using similarity). For example:
• sin(45) = .707
• Lay down a 10-foot pole and raise it 45 degrees. It is 10 * sin(45) = 7.07 feet off the ground
• An 8-foot pole would be 8 * sin(45) = 5.65 feet
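Those pole calculations, as a tiny sketch (the function name is mine, just for illustration):

```python
import math

def raised_height(pole_length, degrees):
    # height of the pole's tip when raised `degrees` above the ground
    return pole_length * math.sin(math.radians(degrees))

print(round(raised_height(10, 45), 2))  # 7.07
print(round(raised_height(8, 45), 2))   # 5.66
```

(8 · sin(45°) is 5.6568..., hence 5.66 rounded; the 5.65 above is the truncated value.)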
These direct manipulations are great for construction (the pyramids won't calculate themselves). Unfortunately, after thousands of years we start thinking the meaning of sine is the height of a
triangle. No no, it's a shape that shows up in circles (and triangles).
Realistically, for many problems we go into "geometry mode" and start thinking "sine = height" to speed through things. That's fine -- just don't get stuck there.
Definition 2: The infinite series
I've avoided the elephant in the room: how in blazes do we actually calculate sine!? Is my calculator drawing a circle and measuring it?
Glad to rile you up. Here's the circle-less secret of sine:
Sine is acceleration opposite to your current position
Using our bank account metaphor: Imagine a perverse boss who gives you a raise that's the exact opposite of your current bank account! If you have \$50 in the bank, then your raise next week is -\$50. Of course, your income might be \$75/week, so you'll still be earning some money (\$75 - \$50 for that week), but eventually your balance will decrease as the "raises" overpower your income.
But never fear! Once your account goes negative (say you're at -\$50), then your boss gives a legit \$50/week raise. Again, your income might be negative, but eventually the raises will overpower it.
This constant pull towards the center keeps the cycle going: when you rise up, the "pull" conspires to pull you in again. It also explains why neutral is the max speed for sine: if you are at the max, you begin falling and accumulating more and more "negative raises" as you plummet. As you pass through the neutral point you are feeling all the negative raises possible (once you cross, you'll start getting positive raises and slowing down).
By the way: since sine is acceleration opposite to your current position, and a circle is made up of a horizontal and vertical sine... you got it! Circular motion can be described as "a constant pull
opposite your current position, towards your horizontal and vertical center".
Geeking Out With Calculus
Let's describe sine with calculus. Like e, we can break sine into smaller effects:
• Start at 0 and grow at unit speed
• At every instant, get pulled back by negative acceleration
How should we think about this? See how each effect above changes our distance from center:
• Our initial kick increases distance linearly: y (distance from center) = x (time taken)
• At any moment, we feel a restoring force of $-x$. We integrate twice to turn negative acceleration into distance:
$\displaystyle{ \iint -x = \frac{-x^3}{3!} }$
Seeing how acceleration impacts distance is like seeing how a raise hits your bank account. The "raise" must change your income, and your income changes your bank account (two integrals "up the chain").
So, after "x" seconds we might guess that sine is "x" (initial impulse) minus $\frac{x^3}{3!}$ (effect of the acceleration):
Something's wrong -- sine doesn't nosedive! With e, we saw that "interest earns interest" and sine is similar. The "restoring force" changes our distance by $\frac{-x^3}{3!}$, which creates another restoring force to consider. Consider a spring: the pull that yanks you down goes too far, which shoots you downward and creates another pull to bring you up (which again goes too far). Springs are crazy!
We need to consider every restoring force:
• $y = x$ is our initial motion, which creates a restoring force of impact...
• $y = \frac{-x^3}{3!}$ which creates a restoring force of impact...
• $y = \frac{x^5}{5!}$ which creates a restoring force of impact...
• $y = \frac{-x^7}{7!}$ which creates a restoring force of impact...
Just like e, sine can be described with an infinite series:
$\displaystyle{\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + ... }$
I saw this formula a lot, but it only clicked when I saw sine as a combination of an initial impulse and restoring forces. The initial push (y = x, going positive) is eventually overcome by a
restoring force (which pulls us negative), which is overpowered by its own restoring force (which pulls us positive), and so on.
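If you want to see the series converge, here's a sketch that sums the terms directly (a plain partial-sum loop, nothing clever):

```python
import math

def sine_series(x, terms=10):
    # partial sum of x - x^3/3! + x^5/5! - x^7/7! + ...
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))

print(round(sine_series(1.0, terms=1), 3))  # 1.0 — just the initial impulse y = x
print(round(sine_series(1.0, terms=2), 3))  # 0.833 — minus the first restoring force
print(round(sine_series(1.0), 9))           # 0.841470985
print(round(math.sin(1.0), 9))              # 0.841470985 — they agree
```

Each extra term is the restoring force created by the previous one; by ten terms the sum is indistinguishable from the library's sin.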
A few fun notes:
• Consider the "restoring force" like "positive or negative interest". This makes the sine/e connection in Euler's formula easier to understand. Sine is like e, except sometimes it earns negative
interest. There's more to learn here :).
• For very small angles, "y = x" is a good guess for sine. We just take the initial impulse and ignore any restoring forces.
The Calculus of Cosine
Cosine is just a shifted sine, and is fun (yes!) now that we understand sine:
• Sine: Start at 0, initial impulse of y = x (100%)
• Cosine: Start at 1, no initial impulse
So cosine just starts off... sitting there at 1. We let the restoring force do the work:
$\displaystyle{y = 1 - \frac{x^2}{2!}}$
Again, we integrate -1 twice to get $\frac{-x^2}{2!}$. But this kicks off another restoring force, which kicks off another, and before you know it:
$\displaystyle{\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + ...}$
Definition 3: The differential equation
We've described sine's behavior with specific equations. A more succinct way (equation):
$\displaystyle{y'' = -y}$
This beauty says:
• Our current position is y
• Our acceleration (2nd derivative, or y'') is the opposite of our current position (-y)
Both sine and cosine make this true. I first hated this definition; it's so divorced from a visualization. I didn't realize it described the essence of sine, "acceleration opposite your position".
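We can even watch the definition generate sine numerically. A rough sketch using simple step-by-step integration (semi-implicit Euler; the step size is an arbitrary choice):

```python
import math

# y'' = -y, starting at y = 0 with unit speed (sine's setup).
# Semi-implicit Euler: update velocity from acceleration, then position.
y, v, dt = 0.0, 1.0, 0.001
for _ in range(int((math.pi / 2) / dt)):  # run for a quarter cycle
    v += -y * dt  # acceleration is the opposite of position
    y += v * dt

print(round(y, 2))  # 1.0 — we've coasted up to the max, just like sin(pi/2)
```

The differential equation alone, stepped forward blindly, traces out the sine curve.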
And remember how sine and e are connected? Well, $e^x$ can be described by (equation):
$\displaystyle{y'' = y}$
The same equation with a positive sign ("acceleration equal to your position")! When sine is "the height of a circle" it's really hard to make the connection to e.
One of my great mathematical regrets is not learning differential equations. But I want to, and I suspect having an intuition for sine and e will be crucial.
Summing it up
The goal is to move sine from some mathematical trivia ("part of a circle") to its own shape:
• Sine is a smooth, swaying motion between min (-1) and max (1). Mathematically, you're accelerating opposite your position. This "negative interest" keeps sine rocking forever.
• Sine happens to appear in circles and triangles (and springs, pendulums, vibrations, sound...).
• Pi is the time from neutral to neutral in sin(x). Similarly, pi doesn't "belong" to circles, it just happens to show up there.
Let sine enter your mental toolbox (Hrm, I need a formula to make smooth changes...). Eventually, we'll understand the foundations intuitively (e, pi, radians, imaginaries, sine...) and they can be
mixed into a scrumptious math salad. Enjoy!
Using this approach, Alistair MacDonald made a great tutorial with code to build your own sine and cosine functions.
Other Posts In This Series
Topic Reference
Squares Animals
[Transum: The answer to the puzzle above and the two extension questions can be found below if you have signed in as a teacher. If you scroll down this page you will see the answer area in red.]
This one was very clever. You really have to think about it; I was stuck at one point but got help, then got it.
We did not realise the squares could be of different size.
Nine animals are arranged in three rows of three:
Draw three squares to contain and separate them.
Sign in to your Transum subscription account to see the answers
Extension 1
What if there were four rows of four animals?
How many squares would it take to separate them all?
Extension 2
Put these animals into four rectangular pens so that there
is an odd number of animals in each pen.
Teacher, do your students have access to computers such as tablets, iPads or Laptops? This page was really designed for projection on a whiteboard but if you really want the students to have access
to it here is a concise URL for a version of this page without the comments:
However it would be better to assign one of the student interactive activities below.
Here is the URL which will take them to another puzzle requiring good spatial awareness.
Mutual majority criterion
The mutual majority criterion is a criterion for evaluating voting systems. Most simply, it can be thought of as requiring that whenever a majority of voters prefer a set of candidates (often
candidates from the same political party) above all others (i.e. when choosing among ice cream flavors, a majority of voters are split between several variants of chocolate ice cream, but agree that
any of the chocolate-type flavors are better than any of the other ice cream flavors), someone from that set must win (i.e. one of the chocolate-type flavors must win). It is the single-winner case
of Droop-Proportionality for Solid Coalitions.
It is an extension of (and also implies) the majority criterion for sets of candidates. Thus, it is often called the Majority criterion for solid coalitions.
Example for candidates A, B, C, D and E (scores are shown for each candidate, with the implicit ranked preferences in parentheses, and the unscored candidates assumed to be ranked last):
17 A:10 B:9 C:8 (A>B>C >D=E)
17 B:10 C:9 A:8 (B>C>A >D=E)
18 C:10 A:9 B:8 (C>A>B >D=E)
48 D:10 E:10 (D>E >A=B=C)
A, B, and C are preferred by a mutual majority, because a group of 52 voters (out of 100), an absolute majority, scored all of them higher than (preferred them over) all other candidates (D and E).
So the mutual majority criterion requires that one of A, B, and C win the election.
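That check can be scripted. A rough sketch (helper names are mine, not from any voting library); ballots are lists of ranked groups, best first, with unlisted candidates treated as tied for last:

```python
def prefers_set(ranking, cand_set, others):
    # True if this ballot ranks every candidate in cand_set above every
    # candidate in others; ranking is a list of sets of tied candidates
    pos = {c: i for i, group in enumerate(ranking) for c in group}
    last = len(ranking)  # unlisted candidates rank behind everyone listed
    return max(pos.get(c, last) for c in cand_set) < min(pos.get(c, last) for c in others)

ballots = [  # the scored example above, reduced to its implicit rankings
    (17, [{"A"}, {"B"}, {"C"}]),
    (17, [{"B"}, {"C"}, {"A"}]),
    (18, [{"C"}, {"A"}, {"B"}]),
    (48, [{"D"}, {"E"}]),
]
support = sum(n for n, r in ballots if prefers_set(r, {"A", "B", "C"}, {"D", "E"}))
total = sum(n for n, r in ballots)
print(support, total, support * 2 > total)  # 52 100 True — a mutual majority
```

Since 52 of 100 voters rank all of {A, B, C} above both D and E, the set is supported by an absolute majority.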
Complying and non-complying methods
Systems which pass
Borda-Elimination, Bucklin, Coombs, IRV, Kemeny-Young, Nanson's method, Pairwise-Elimination, Ranked Pairs, Schulze, Smith//Minmax, Descending Solid Coalitions, Majority Choice Approval, any
Smith-efficient Condorcet method, most Condorcet-IRV hybrid methods
Systems which fail
most rated methods (such as Approval voting, Score voting, and STAR voting), Black, Borda, Dodgson, Minmax, Sum of Defeats
Alternative Definitions
It can be stated as follows:
A mutual majority (MM) is a set of voters comprising a majority of the voters, who all prefer some same set of candidates to all of the other candidates. That set of candidates is their
MM-preferred set.
If a MM vote sincerely, then the winner should come from their MM-preferred set.
A voter votes sincerely if s/he doesn't vote an unfelt preference, or fail to vote a felt preference that the balloting system in use would have allowed hir to vote in addition to the preferences
that she actually does vote.
To vote a felt preference is to vote X over Y if you prefer X to Y.
To vote an unfelt preference is to vote X over Y if you don't prefer X to Y.
or more simply,
If there is a majority of voters for which it is true that they all rank a set of candidates above all others, then one of these candidates must win.
A generalized form that also encompasses rated voting methods:
If a majority of voters unanimously vote a given set of candidates above a given rating or ranking, and all other candidates below that rating or ranking, then the winner must be from that set.
Note that the logical implication of the mutual majority criterion is that a candidate from the smallest set of candidates preferred by the same absolute majority of voters over all others must win;
this is because if, for example, 51 voters prefer A over B, and B over C, with the other 49 voters preferring C, then not only is (A, B) a set of candidates preferred by an absolute majority over all
others (C), but candidate A is also a candidate preferred by an absolute majority over all others (B and C), and therefore A must win in order to satisfy the criterion.
It is sometimes simply (and confusingly) called the Majority criterion. This usage is due to Woodall.^[1]
Related forms of the criterion
Stronger forms
The mutual majority criterion is implied by the dominant mutual third property, which itself is implied by the Smith criterion.
Weaker forms
By analogy to the majority criterion for rated ballots, one could design a mutual majority criterion for rated ballots, which would be the mutual majority criterion with the requirement that each
voter in the majority give at least one candidate in the mutual majority-preferred set of candidates a perfect (maximal) score. An even weaker criterion along these lines would be that the mutual
majority must give everyone they prefer a perfect score; Majority Judgment passes this.
Voting methods which pass the majority criterion but not the mutual majority criterion (some ranked methods fall under this category, notably FPTP) possess a spoiler effect, since if all but one
candidate in the mutual majority drops out, the remaining candidate in the mutual majority is guaranteed to win, whereas if nobody had dropped out, a candidate not in the mutual majority might have
won. This is also why Sequential loser-elimination methods whose base methods pass the majority criterion pass the mutual majority criterion.
All Condorcet methods pass mutual majority when there is a Condorcet winner, since if there is a mutual majority set, all candidates in it pairwise beat all candidates not in it by virtue of being
preferred by an absolute majority; since the CW isn't pairwise beaten by anyone, they must be in the set. Smith-efficient Condorcet methods always pass mutual majority.
In contrast to the dominant mutual third set, a mutual majority set is always also a dominant mutual majority set. Every coalition that has majority support also pairwise beats the rest of the
candidates, but that is not true of all coalitions supported by more than 1/3 of the voters.
Dominant mutual plurality criterion
The mutual majority criterion doesn't apply to situations where there are large "sides" if enough voters are indifferent to the large sides. Example:
51 A>C
49 B
10 C(>A=B)
The last line "10 C(>A=B)" should be read as "these 10 voters prefer C as their 1st choice and are indifferent between A and B." Even though candidate A is preferred by the (same) majority of voters
in pairwise matchups against B (51 vs. 49) and C (51 vs. 10), candidate A technically is not preferred by an absolute majority (i.e. over half of all voters), and C would beat A in some mutual
majority-passing methods, such as Bucklin. A "mutual plurality" criterion might make sense for these types of situations where a plurality of voters prefer a set of candidates above all others, and
everyone in that set pairwise beats everyone outside of the set; this mutual plurality criterion implies the mutual majority criterion (because a majority is a plurality, and anyone who is preferred
by an absolute majority over another candidate is guaranteed to pairwise beat that candidate, thus all candidates in the mutual majority set pairwise beat all other candidates). The Smith criterion
implies this mutual plurality criterion (because the Smith criterion implies that someone from the smallest set of candidates that can pairwise beat all others must win, and this smallest set must be
a subset of any set of candidates that can pairwise beat all candidates not in the set). IRV doesn't pass the mutual plurality criterion; example:
15: A1>A2>B
20: A2>B
30: B
20: C1>B
15: C2>C1>B
B is ranked above all other candidates by 30 voters, whereas no other set of candidates is ranked above all others by more than 20 voters. Yet after a few eliminations, this becomes:
35: A2>B
30: B
35: C2>B
and B is eliminated first, despite pairwise dominating everyone else (i.e. being the Condorcet winner). This is an example of the Center squeeze effect.
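The center squeeze can be reproduced with a bare-bones IRV sketch. Assumptions: the candidate with the lowest first-preference tally is eliminated each round, with ties broken alphabetically (real-world IRV rules vary, so the exact elimination order here is an artifact of that tie-break):

```python
from collections import Counter

def irv(ballots):
    # ballots: list of (count, ranked preference list); a ballot with no
    # remaining candidates is exhausted. Returns (elimination order, winner).
    remaining = {c for _, prefs in ballots for c in prefs}
    eliminated = []
    while len(remaining) > 1:
        tally = Counter()
        for count, prefs in ballots:
            for c in prefs:
                if c in remaining:
                    tally[c] += count
                    break
        loser = min(remaining, key=lambda c: (tally[c], c))
        eliminated.append(loser)
        remaining.remove(loser)
    return eliminated, remaining.pop()

ballots = [
    (15, ["A1", "A2", "B"]),
    (20, ["A2", "B"]),
    (30, ["B"]),
    (20, ["C1", "B"]),
    (15, ["C2", "C1", "B"]),
]
eliminated, winner = irv(ballots)
print(eliminated, winner)  # ['A1', 'C2', 'B', 'A2'] C1 — B never reaches the final
```

Once the flanks consolidate to 35 votes each, B sits at 30 and the pairwise champion is squeezed out, whichever tie-break is used.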
Semi-mutual majority
If there are some losing candidates ranked above the mutual majority set of candidates by some voters in the majority, this voids the criterion guarantee. Example:
26 A>B
25 B
49 C
Despite B being preferred by an absolute majority over C, and the only candidate preferred by any voters in that absolute majority over or equally to B being A (with no voters in the majority
preferring anyone over A), the mutual majority criterion doesn't guarantee that either A or B must win. It has been argued that to avoid the Chicken dilemma, C must win here (and C would win in some
mutual majority-passing methods, such as IRV, which is often claimed to resist the chicken dilemma), but methods that do so have a spoiler effect, since if A drops out, B must win by the majority
(and thus mutual majority) criterion. All major defeat-dropping Condorcet methods elect B here, since they have the weakest pairwise defeat.
Independence of mutual majority-dominated alternatives
Similar to Independence of Smith-dominated Alternatives, a "independence of mutual majority-dominated alternatives" criterion could be envisioned.
Both instant-runoff voting and Descending Acquiescing Coalitions fail this criterion, as can be shown by Left, Center, Right scenarios when y+z also constitutes a majority.
For instance:
4: L>C>R
3: R>C>L
2: C>L>R
The smallest mutual majority set is {L, C}, and C beats L pairwise, so in any election where those two candidates are the only ones in the running, C wins. However, IRV first eliminates C and then L
beats R. DAC first excludes R from the set of viable candidates (because the {L, C} coalition is the largest). Then L has the greatest first preference count of the two and thus wins.
Finding the mutual majority set
Pairwise counting
Note that the mutual majority set is a pairwise-dominating set (every candidate in it pairwise beats every candidate not in it). So one way to find it would be to find the Smith set ranking, and then
look for the smallest group of candidates highest in the Smith ranking who are preferred by a mutual majority, if there is one.
The smallest mutual majority set can be found in part by looking for the Smith set, because the Smith set is always a subset of the mutual majority set when one exists, and then adding in candidates
into the mutual majority set who are preferred by enough of the voters who helped the candidates in the Smith set beat other candidates to constitute a mutual majority. Example:
35 A>B
35 B>A
30 C>B
The Smith set is just B here. When looking at the 70 voters who helped B beat C and the 65 for B>A, it's clear that a majority of them prefer A over C, and that an absolute majority of voters prefer
either A or B over C. So the smallest mutual majority set is A and B.
Bucklin approach
An alternative way to find the smallest mutual majority set would be to use a modified version of Bucklin voting: for each voter, assume they "approve" all of their 1st choices. Find the ballot which
approves the most candidates; for each other ballot, until it approves as many candidates as this "most-approvals" ballot, the most-approvals ballot should be prevented from approving any more
candidates. Once a ballot approves as many or more candidates than the most-approvals ballot, it should be considered the most-approvals ballot instead, and likewise, it should stop approving
additional candidates. For each ballot that is not a most-approvals ballot, approve all candidates at the next consecutive rank where candidates haven't been approved yet for that ballot. Do this
until some candidate(s) are approved by a majority of voters, and then check if all ballots approving each majority-approved candidate do not approve anyone else. If so, then the majority-approved
candidates are the smallest mutual majority set, but if not, then there is no smallest mutual majority set. For example:
17 A>B>C
17 A=B>C
17 C>A>B
49 D>E>F
34 voters approve A as their 1st choice, 17 B, 17 C, and 49 D. The 17 A=B voters approve both A and B, two candidates, making them the most-approvals voters currently, so they are not allowed to
approve any more candidates for now. Adding in the next rank, 17 voters now approve B as their 2nd choice, 17 A, and 49 E. Now 51 voters approve A, so check whether they are a mutual majority. In
this case, the only candidates any of the 51 voters prefer more than or equally to A are B and C; it is seen that all 51 voters prefer any of A, B, or C over all other candidates (D, E, and F), so
ABC is the smallest mutual majority set.
Calculating Power Worksheet - Calculatorey
Understanding Power Calculation
Calculating power is an important concept in physics and engineering. Power is the rate at which work is done or energy is transferred. It is a crucial parameter in various fields, including
electrical engineering, mechanical engineering, and thermodynamics. To calculate power, you need to understand the basic formula and units of measurement.
Power Formula
The formula to calculate power is:
Power = Work / Time
• Power is measured in watts (W)
• Work is measured in joules (J)
• Time is measured in seconds (s)
This formula shows that power is the amount of work done or energy transferred per unit time. It is a measure of how quickly energy is being used or produced.
Calculating Power in Electrical Circuits
In electrical circuits, power can also be calculated using the formula:
Power = Voltage x Current
• Voltage is measured in volts (V)
• Current is measured in amperes (A)
This formula shows the relationship between voltage, current, and power in an electrical circuit. By multiplying the voltage and current, you can determine the amount of power being used or supplied
in the circuit.
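Both formulas can be expressed directly in code (illustrative values only):

```python
def power_from_work(work_joules, time_seconds):
    # P = W / t, in watts
    return work_joules / time_seconds

def electrical_power(voltage_volts, current_amperes):
    # P = V * I, in watts
    return voltage_volts * current_amperes

print(power_from_work(600.0, 30.0))  # 20.0 — 600 J of work done over 30 s
print(electrical_power(230.0, 2.0))  # 460.0 — a 230 V supply driving 2 A
```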
Units of Power
Power has various units of measurement, depending on the context of the calculation. The most common units of power are:
• Watt (W): The SI unit of power, equal to one joule per second
• Kilowatt (kW): Equal to 1,000 watts
• Megawatt (MW): Equal to 1,000,000 watts
These units are used to express power in different scales, from small electronic devices to large power plants.
Calculating Mechanical Power
In mechanical systems, power can be calculated using the formula:
Power = Force x Distance / Time
• Force is measured in newtons (N)
• Distance is measured in meters (m)
This formula represents the work done by a force over a distance in a given amount of time. It is commonly used in engineering to determine the power output of engines and machines.
Efficiency and Power
Efficiency is an important concept when calculating power, especially in energy conversion systems. Efficiency is the ratio of output power to input power, expressed as a percentage. The higher the
efficiency of a system, the more output power it can generate from a given input power.
Efficiency can be calculated using the formula:
Efficiency = (Useful Output Power / Input Power) x 100%
This formula helps engineers and researchers optimize the performance of systems by maximizing the output power while minimizing the input power.
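The mechanical power and efficiency formulas, sketched the same way (illustrative values only):

```python
def mechanical_power(force_newtons, distance_meters, time_seconds):
    # P = F * d / t, in watts
    return force_newtons * distance_meters / time_seconds

def efficiency_percent(useful_output_power, input_power):
    # Efficiency = (useful output power / input power) * 100%
    return useful_output_power / input_power * 100.0

print(mechanical_power(500.0, 4.0, 10.0))  # 200.0 — 500 N moved 4 m in 10 s
print(efficiency_percent(200.0, 250.0))    # 80.0 — 80% of input power is useful
```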
Applications of Power Calculation
Power calculation has numerous applications in various fields, including:
• Electrical Engineering: Power calculation is essential for designing and analyzing electrical circuits, generators, and motors.
• Mechanical Engineering: Power calculation is used in designing engines, machines, and mechanical systems.
• Thermodynamics: Power calculation is crucial for understanding energy transfer and conversion in thermodynamic systems.
By accurately calculating power, engineers and researchers can optimize the performance and efficiency of systems, leading to more sustainable and energy-efficient technologies.
Power calculation is a fundamental concept in physics and engineering, essential for analyzing and designing various systems. By understanding the basic formulas and units of measurement, you can
calculate power in electrical circuits, mechanical systems, and thermodynamic processes. Efficiency plays a crucial role in power calculation, helping to maximize output power while minimizing input
power. With the applications of power calculation in various fields, engineers and researchers can develop more efficient and sustainable technologies for the future.
Constitutive Models
FLAC3D Theory and Background • Constitutive Models
Numerical solution schemes face several difficulties when implementing constitutive models to represent geomechanical material behavior. There are three characteristics of geo-materials that cause
particular problems.
One is physical instability. Physical instability can occur in a material if there is the potential for softening behavior when the material fails. When physical instability occurs, part of the
material accelerates and stored energy is released as kinetic energy. Numerical solution schemes often have difficulties at this stage because the solution may fail to converge when a physical
instability arises.
A second characteristic is the path dependency of nonlinear materials. In most geomechanical systems, there are an infinite number of solutions that satisfy the equilibrium, compatibility, and
constitutive relations that describe the system. A path must be specified for a “correct” solution to be found. For example, if an excavation is made suddenly (e.g., by explosion), then the solution
may be influenced by inertial effects that introduce additional failure of the material. This may not be seen if the excavation is made gradually. The numerical solution scheme should be able to
accommodate different loading paths in order to properly apply the constitutive model.
A third characteristic is the nonlinearity of the stress-strain response. This includes the nonlinear dependence of both the elastic stiffness and the strength envelope on the confining stress. This
can also include behavior after ultimate failure that changes character according to the stress level (e.g., different post-failure response in the tensile, unconfined, and confined regimes). The
numerical scheme needs to be able to accommodate these various forms of nonlinearity.
The difficulties faced in numerical simulations in geomechanics—physical instability, path dependence, and implementation of extremely nonlinear constitutive models—can all be addressed by using the
explicit, dynamic solution scheme. This scheme allows the numerical analysis to follow the evolution of a geologic system in a realistic manner, without concerns about numerical instability problems.
In the explicit, dynamic solution scheme, the full dynamic equations of motion are included in the formulation. By using this approach, the numerical solution is stable even when the physical system
being modeled is unstable. With nonlinear materials, there is always the possibility of physical instability (e.g., the sudden collapse of a slope). In real life, some of the strain energy in the
system is converted into kinetic energy, which then radiates away from the source and dissipates. The explicit, dynamic solution approach models this process directly, because inertial terms are
included—kinetic energy is generated and dissipated.
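The role of the inertial terms can be illustrated with a minimal sketch (this is not FLAC3D's actual implementation; the one-degree-of-freedom spring and all names here are my own illustration). An explicit scheme advances velocities and displacements from the current out-of-balance force only, so kinetic energy appears naturally rather than being suppressed:

```python
import math

def explicit_step(u, v, dt, mass, restoring_force):
    """One explicit (semi-implicit Euler) time step: update velocity from the
    current out-of-balance force, then update displacement from the new velocity."""
    a = restoring_force(u) / mass   # acceleration from the current state only
    v = v + a * dt                  # inertial term: velocity integrates force
    u = u + v * dt                  # displacement integrates velocity
    return u, v

# Linear elastic spring as the simplest "constitutive model": F = -k*u
k, m, dt = 1.0, 1.0, 0.01
u, v = 1.0, 0.0
peak = 0.0
for _ in range(10000):
    u, v = explicit_step(u, v, dt, m, lambda x: -k * x)
    peak = max(peak, abs(u))
# With no damping the oscillation stays bounded near its initial amplitude:
# kinetic energy is carried explicitly in the solution rather than damped out.
```

Swapping `restoring_force` for a softening law is what lets a scheme of this kind follow a physical instability: the released strain energy simply shows up as velocity.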
In contrast, schemes that do not include inertial terms must use some numerical procedure to treat physical instabilities. Even if the procedure is successful at preventing numerical instability, the
path taken may not be a realistic one. The numerical scheme should not be viewed as a black box that will give “the solution.” The way the system evolves physically can affect the solution. The
explicit, dynamic solution scheme can follow the physical path. By including the full law of motion, this scheme can evaluate the effect of the loading path on the constitutive response.
The explicit, dynamic solution scheme also allows the implementation of strongly nonlinear constitutive models because the general calculation sequence allows the field quantities (forces/stresses
and velocities/displacements) at each element in the model to be physically isolated from one another during one calculation step. The general calculation sequence is described in the section of
Theoretical Background. The implementation of elastic/plastic constitutive models within the framework of this scheme is discussed in the Incremental Formulation section.
The mechanical constitutive models available range from linearly elastic models to highly nonlinear plastic models. The basic constitutive models are listed below. A short discussion of the
theoretical background and simple example tests for each model follow this listing.
Itasca Software © 2024, Itasca. Updated: Sep 26, 2024
Bloomberg Is Wealthier Than The Bottom 125 Million Americans
Michael Bloomberg’s net worth is $64.2 billion, according to Forbes. This makes him the eighth wealthiest person in America. Wealth at that scale is hard to comprehend and so it can be useful to
compare it to something else. And if you are trying to highlight the high level of inequality in the country, the best thing to compare it to is the wealth held by those at the bottom of our society.
Every three years, the Federal Reserve releases its Survey of Consumer Finances, which is the best household wealth survey in the country. In the latest SCF data, the bottom 38 percent of American
households have a collective net worth of $11.4 billion, meaning that Michael Bloomberg owns nearly 6 times as much wealth as they do.
The bottom 38 percent of households is equal to around 47.8 million households. Since households have an average of 2.63 members, this is equal to about 125.7 million people. Thus, Bloomberg’s wealth
is nearly 6 times greater than the wealth of the bottom 125 million people combined.
In fact, this 125 million figure actually understates how lopsided things are. The definition of wealth used in the official SCF publications includes cars as wealth. But academics who study wealth inequality, like Edward Wolff, often do not count cars as wealth because they are rapidly depreciating consumer durables that most people cannot really sell, for the practical reason that they need a car to get around and live.
When you exclude cars from the definition of wealth, what you find is that the bottom 48 percent of households have less combined wealth than Michael Bloomberg does. This is 60.4 million households
or 158.9 million people.
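The arithmetic above is easy to check in a few lines (the figures are exactly those quoted in the text):

```python
# Figures quoted above (Forbes net worth; Fed Survey of Consumer Finances aggregates)
bloomberg_net_worth = 64.2e9
bottom_38pct_wealth = 11.4e9
ratio = bloomberg_net_worth / bottom_38pct_wealth   # ~5.6, i.e. "nearly 6 times"

households = 47.8e6                                 # bottom 38 percent of households
people = households * 2.63                          # avg household size -> ~125.7 million

# Excluding cars: bottom 48 percent, 60.4 million households
people_ex_cars = 60.4e6 * 2.63                      # -> ~158.9 million people
```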
Regardless of which measure you use, the upshot is clear: the United States is simultaneously home to some of the wealthiest people on earth and to a large propertyless underclass whose members have scarcely a penny to their names.
Quantitative Aptitude: Time, Speed and Distance Set 3
1. The distance between two towns A and B is 545 km. A train starts from town A at 8 A.M. and travels towards town B at 80 km/hr. Another train starts from town B at 9 : 30 A.M. and travels towards
town A at 90 km/hr. At what time will they meet each other?
A) 11:30 AM
B) 12:30 PM
C) 12:00 Noon
D) 1:00 PM
E) 11:00 AM
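For reference, question 1 reduces to a short calculation (this worked check is mine, not part of the original quiz):

```python
# Train A runs alone from 8:00 to 9:30 (1.5 h) at 80 km/h before train B starts.
remaining = 545 - 80 * 1.5      # 425 km separating the trains at 9:30 AM
t = remaining / (80 + 90)       # closing speed 170 km/h -> 2.5 h
# 9:30 AM + 2.5 h = 12:00 noon, i.e. option C
```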
2. A bus can travel 560 km in 8 hours. The ratio of the speed of the train to that of the car is 13 : 8. If the speed of the bus is 7/8 of the speed of the car, find in how much time the train can cover a 520 km distance.
A) 3 hours
B) 4 hours
C) 6 hours
D) 5 hours
E) 2 hours
3. A person has to travel from point A to point B by car in a scheduled time at uniform speed. Due to a problem in the car engine, the speed of the car has to be decreased by 1/5th of the original speed after covering 30 km. At this reduced speed he reaches point B 45 minutes later than the scheduled time. Had the engine malfunctioned after 48 km instead, he would have been late by only 36 minutes. Find the distance between points A and B.
A) 120 km
B) 80 km
C) 100 km
D) 150 km
E) 70 km
4. Towns A and B are 225 km apart. Two cars P and Q travel towards each other from towns A and B respectively and meet after 3 hours. If P travelled at 1/2 of its original speed and Q at 2/3 of its original speed, they would have met after 5 hours. Find the speed of the faster car.
A) 50 km/hr
B) 40 km/hr
C) 45 km/hr
D) 30 km/hr
E) 60 km/hr
5. From point A, Priya and Bhavna start cycling towards point B which is 60 km away from A. The speed of Priya is 10 km/hr more than the speed of Bhavna. After reaching point B, Priya returns
towards point A and meets Bhavna 12 km away from point B. Find the speed of Bhavna.
A) 40 km/hr
B) 15 km/hr
C) 30 km/hr
D) 20 km/hr
E) 45 km/hr
6. A train crosses 2 men running in the same direction at speeds 5 km/hr and 8 km/hr in 12 seconds and 15 seconds respectively. Find the speed of the train.
A) 30 km/hr
B) 24 km/hr
C) 25 km/hr
D) 35 km/hr
E) 20 km/hr
7. A train which is travelling at 80 km/hr meets another train travelling in same direction and then leaves it 150 m behind in next 20 seconds. Find the speed of the second train.
A) 72 km/hr
B) 53 km/hr
C) 64 km/hr
D) 59 km/hr
E) 65 km/hr
8. In a 500 m race A can beat B by 30 m, and in a 400 m race B can beat C by 20 m. Then in a 200 m race A will beat C by how much distance (in m)?
A) 58.2 m
B) 68.4 m
C) 63.5 m
D) 72.8 m
E) 55.2 m
9. 2 towns A and B are 300 km apart. 2 trains start travelling from town A towards town B such that the second train leaves 8 hours late than the first one. They both arrive at town B
simultaneously. If the speed of the faster train is 10 km/hr more than the speed of the slower train, find the time taken by the slower train to complete the journey.
A) 25 hours
B) 22 hours
C) 14 hours
D) 18 hours
E) Cannot be determined
10. A man leaves from point A at 4 AM and reaches point B at 6 AM. Another man leaves from point B at 5 AM and reaches point A at 8 AM. Find the time when they meet.
A) 6:20 AM
B) 6:15 AM
C) 5:45 AM
D) 5:36 AM
E) 5:30 AM
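Similarly, question 10 can be checked in a few lines (again my own worked solution, not part of the quiz):

```python
# Normalize the A-to-B distance to 1. First man: 2 h trip; second man: 3 h trip.
v1, v2 = 1 / 2, 1 / 3       # fractions of the distance covered per hour
lead = v1 * 1               # first man's head start by 5 AM (he left at 4 AM)
t = (1 - lead) / (v1 + v2)  # hours after 5 AM until they meet
minutes = t * 60            # 36 -> they meet at 5:36 AM, option D
```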
anychart.enums.AggregationType enum | AnyChart API Reference
[enum] anychart.enums.AggregationType
Aggregation type for table columns.
Value Description Example
average Calculate average value in a group and use it as a value of a point.
first Choose the first non-NaN value in a group as a value of a point.
first-value Choose the first non-undefined value as a value of a point.
last Choose the last non-NaN value in a group as a value of a point.
last-value Choose the last non-undefined value as a value of a point.
list Put all non-undefined values in a group into an array and use it as a value of a point.
max Choose the biggest non-NaN value in a group as a value of a point.
min Choose the lowest non-NaN value in a group as a value of a point.
sum Calculate the sum of values in a group and use it as a value of a point.
weighted-average Calculate average value in a group using other column values as weights and use it as a value of a point.
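The aggregation semantics above can be mimicked outside the library. The following sketch (my own illustration in Python, not AnyChart code, which is JavaScript) shows the non-NaN handling for a few of the modes:

```python
import math

def first(values):
    """'first': the first non-NaN value in the group."""
    return next(v for v in values if not math.isnan(v))

def last(values):
    """'last': the last non-NaN value in the group."""
    return next(v for v in reversed(values) if not math.isnan(v))

def weighted_average(values, weights):
    """'weighted-average': average using another column's values as weights,
    skipping NaN entries in the aggregated column."""
    pairs = [(v, w) for v, w in zip(values, weights) if not math.isnan(v)]
    return sum(v * w for v, w in pairs) / sum(w for _, w in pairs)

# e.g. a price column grouped with a volume column as weights
group = [float("nan"), 10.0, 30.0]
volumes = [5.0, 1.0, 3.0]
```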
Seminars in Theoretical Particle Physics
There are a number of seminar series organised by (or of interest to) the Particle Physics Theory research group.
About Particle Physics Theory seminars
The Particle Physics Theory seminar is a weekly series of talks reflecting the diverse interests of the group. Topics include analytic and numerical calculations based on the Standard Model of
elementary particle physics, theories exploring new physics, as well as more formal developments in gauge theories and gravity. Find out more about Particle Physics Theory seminars.
Next Particle Physics Theory seminar
• Event time: 1:30pm until 3:30pm
• Event date: 6th November 2024
• Speaker: Thomas Stone (Durham University)
• Location: Higgs Centre for Theoretical Physics
About Higgs Centre colloquia
The Higgs Centre Colloquia are a fortnightly series of talks aimed at a wide-range of topical Theoretical Physics issues. Find out more about Higgs Centre colloquia.
Next Higgs Centre colloquium
There are currently no future events of this type scheduled. Please check back later, or subscribe to this event's iCalendar feed to get future events added directly to your calendar.
Edinburgh Mathematical Physics Group (EMPG) seminars
These seminars are organised by the School of Mathematics.
Theoretical Particle Physics event calendar
This calendar shows the Particle Physics Theory seminars, Higgs Centre Colloquia and EMPG seminars.
View the Theoretical Particle Physics event calendar (via Google calendar)
What size wire is needed for 200 amps?
Wire Sizes For 200 Amp Service: AWG, American Wire Gauge, is the US standard for sizing electrical wiring. Wiring 200 amp service requires either #2/0 copper wiring or #4/0 aluminum or copper-clad aluminum wiring.
How many amps is #2 Thhn good for?
The allowable ampacities of insulated copper conductors depend on conductor size (AWG/kcmil) and on the insulation temperature rating, e.g. 60°C (140°F) for types TW and UF versus 90°C (194°F) for types such as THHN, THHW, THW-2, XHHW and XHHW-2.
How many amps is #6 Thhn good for?
#6 THHN is good for 65 amps at 75° C.
How many amps is #4 Thhn good for?
#4 THHN is only rated for 85 amps at 75° C. This would be for MC cable or conductors in a raceway. SE cable in thermal insulation or NM cable would be rated at 60° C which has an ampacity of 70 amps.
What size wire do I need to run 200 amp Service 200 feet?
Per Article 250 of the NEC , The minimum size for a grounding conductor for a circuit protected by a 200 amp breaker is #6 copper or #4 Aluminum.
How many wires do I need for a 200 amp service?
Copper-clad Aluminum It has lower conductivity when compared to copper. You should utilize two American wire gage aluminum wires for 200 amp service.
What kind of Romex is used for outlets?
The following NEC regulations apply to Romex conductors:
Wire Gauge or Type Rated Amperage Common Uses
14-2 Romex 15 A Lighting Circuits
12-2 Romex 20 A Lighting and Outlet Circuits, refrigerator
10-2 Romex 30 A Electric water heater, baseboard heaters
10-3 Romex 30 A Electric Clothes Dryer
Is 8 THHN good for 50 amps?
No, it is not safe to use a 50 amp breaker. The only time 50 amps would be OK with #8 would be if these were individual THHN wires in conduit. You need a 40 amp breaker. If the stove specifically calls for a 50 amp breaker, the wire needs to be replaced in order to use a 50 amp breaker.
Is #6 Thhn good for 60 amps?
For 60 ampere breakers, electricians and professionals suggest using a wire size gauge ranging from 6 AWG to 4 AWG. In particular, a 4 AWG copper cable can hold at least 70 amps of electricity before
giving up. Meanwhile, a 6 AWG copper wire can only hold up to 55 amps before it falters.
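A small lookup sketch using only the ampacity figures quoted in the answers above (#6 and #4 THHN copper at 75°C). This is purely illustrative; sizing decisions should always be confirmed against the actual NEC tables:

```python
# Ampacities as quoted in the text above (THHN copper, 75°C column).
ampacity_75c_thhn_cu = {"#6": 65, "#4": 85}

def conductor_ok(size, breaker_amps):
    """A conductor is acceptable only if its ampacity meets or exceeds
    the breaker rating protecting it."""
    return ampacity_75c_thhn_cu[size] >= breaker_amps
```

Under these figures, `conductor_ok("#6", 60)` is `True` while `conductor_ok("#4", 100)` is `False`, matching the advice above that #6 suits a 60 amp breaker but a 100 amp circuit needs heavier wire.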
Cancellation of interference distortions caused by intermodulation between FM signals on adjacent channels
A crosstalk cancellation circuit includes a pair of input terminals to which interference-affected FM signals are applied and a pair of output terminals from which distortionless demodulated signals are delivered. For each transmission channel an envelope detector is provided which is connected to the input terminal to detect the envelope of the FM signal. Closed-loop feedback circuits are cross-coupled across the output terminals to process the signals thereat with the detected envelopes, deriving an offset signal for each channel which is combined with a frequency-demodulated signal of the FM signal of that channel.
The present invention relates generally to apparatus for eliminating distortions caused by intermodulation between two frequency-modulated signals on adjacent transmission channels, and more
particularly to such apparatus for CD-4 quadraphonic sound recording and reproducing systems in which the crosstalk between adjacent channels varies in magnitude and phase as a function of time.
In the CD-4 quadraphonic sound recording system the electrical signals obtained from the four microphones, left-front (L_f), left-rear (L_r), right-front (R_f) and right-rear (R_r), are combined to produce sum signals (L_f + L_r) and (R_f + R_r) and difference signals (L_f − L_r) and (R_f − R_r). A frequency translation of the difference signals is effected by frequency modulation on a 30 kHz carrier. The frequency-translated FM signal (L_f − L_r) is then combined with the baseband sum signal (L_f + L_r) and recorded along the left track of a groove, and the frequency-translated FM signal (R_f − R_r) is combined with the baseband sum signal (R_f + R_r) and recorded along the right sound track of the groove.
Each of the separate tracks serves as a transmission channel for the frequency-division multiplexed (FDM) signals. In the sound reproduction process, the frequency-translated signal on each
transmission channel undergoes frequency demodulation. However, the pickup stylus of a playback system acts as a principal source of crosstalk between the two channels so that intermodulation or
interference occurs through the crosstalk path between the frequency-modulated signals of the separate channels. Furthermore, the magnitude and phase of the crosstalk through such transducers varies
as a function of time.
Crosstalk cancellation circuits have been proposed in the past to compensate for the interference distortion caused by the intermodulation of the signals on adjacent channels. However, the prior art
crosstalk cancellation circuits are not satisfactory because they are incapable of cancelling such magnitude and phase component distortions which vary as a function of time.
The present invention is based on mathematical analyses of intermodulation through crosstalk paths having magnitude and phase shift variations with time. The mathematical analyses have resulted in a
discovery that the distortion components of a frequency-demodulated signal can be cancelled with an offset signal derived from the envelope of the frequency-modulated signal.
An object of the invention is to provide a crosstalk cancelling circuit which includes an envelope detector for detecting the envelope of the frequency-modulated signal of each transmission channel, generating an offset signal from the detected envelope, and cancelling the interference distortion contained in the frequency-demodulated signal with that offset signal.
This and other objects, features and advantages of the invention will be understood from the following description when taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic illustration of adjacent channels each containing frequency-modulated audio signals showing crosstalk paths between them;
FIG. 2 is an illustration of a first embodiment of the invention;
FIG. 3 is an illustration of a second embodiment of the invention;
FIG. 4 is a graphic illustration of waveform converters of FIG. 3; and
FIG. 5 is an illustration of the details of the waveform converters.
In FIG. 1 of the drawings, a transducer or pickup stylus of a CD-4 quadraphonic system is represented schematically by a broken-line rectangle 10 which includes a left channel transducer 12L and a right channel transducer 12R. These transducers are shown electromagnetically coupled by crosstalk paths 14 and 16. The frequency-modulated signals on the left and right channels of a record groove, designated C_L and C_R respectively, are applied to the left and right transducers 12L, 12R, and through the crosstalk paths 14 and 16 they are distorted in waveform. The output signals from the transducers 12L and 12R are designated C_l and C_r; they contain the input signals plus the crosstalk signal components K_R e^{jθ_R} and K_L e^{jθ_L}, respectively, where K_R and K_L are crosstalk ratios from the right to left and the left to right channels, respectively, and θ_R and θ_L represent phase shifts present in the respective crosstalk paths.
A quantitative analysis of the input and output signals gives the following relations:
C_L = cos{ω_c t + f(t)}   (1L)

C_R = cos{ω_c t + g(t)}   (1R)

where ω_c is a carrier frequency which is frequency-modulated by the left and right modulating signals f(t) and g(t). The signals delivered by the transducers are then

C_l = cos{ω_c t + f(t)} + K_R cos{ω_c t + g(t) + θ_R}   (2L)

C_r = cos{ω_c t + g(t)} + K_L cos{ω_c t + f(t) + θ_L}   (2R)

Equation (2L) indicates that the intermodulation results in a left-channel FM signal having a varying amplitude, or envelope distortion, which is represented by the following Equation:

E_nvL(t) = √(1 + K_R² + 2K_R cos{f(t) − g(t) − θ_R})   (3L)

and a phase component distortion represented by

φ_L(t) = f(t) + tan⁻¹[K_R sin{g(t) − f(t) + θ_R} / (1 + K_R cos{g(t) − f(t) + θ_R})]   (4L)

Likewise, Equation (2R) indicates that the envelope distortion of the right-channel FM signal is represented by

E_nvR(t) = √(1 + K_L² + 2K_L cos{g(t) − f(t) − θ_L})   (3R)

and the phase component distortion is represented by

φ_R(t) = g(t) + tan⁻¹[K_L sin{f(t) − g(t) + θ_L} / (1 + K_L cos{f(t) − g(t) + θ_L})]   (4R)

It will be seen that the envelope distortions are functions of the crosstalk ratios and of the phase shifts in the crosstalk paths. When the output signals C_l and C_r are frequency-demodulated, the demodulated left and right output signals e_L(t) and e_R(t) are given by Equations (5L) and (5R), in which f'(t) and g'(t) are the recovered signals, identical to the signals f(t) and g(t), respectively, if no distortion is contained in the recovered signals.

It will be seen that the denominators of Equations (5L) and (5R) are equal to the squared values of the envelopes given by Equations (3L) and (3R), respectively.
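Equation (3L) is easy to verify numerically: the peak amplitude of a unit carrier plus a crosstalk term of relative level K and phase offset δ = f − g − θ_R matches the closed form. The following check is my own sketch, not part of the patent:

```python
import math

def envelope_numeric(K, delta, n=200000):
    """Peak amplitude of cos(x) + K*cos(x - delta), found by dense sampling
    of one full carrier cycle (f, g, theta treated as frozen over the cycle)."""
    return max(
        abs(math.cos(2 * math.pi * i / n) + K * math.cos(2 * math.pi * i / n - delta))
        for i in range(n)
    )

def envelope_formula(K, delta):
    """Equation (3L): sqrt(1 + K^2 + 2K cos(delta)), delta = f - g - theta_R."""
    return math.sqrt(1 + K * K + 2 * K * math.cos(delta))
```

For a typical crosstalk ratio K = 0.1 the two agree to numerical precision, confirming the amplitude-of-a-sum-of-cosines identity the derivation relies on.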
FIG. 2 is an illustration of a first preferred embodiment of the invention. The signals C_l and C_r derived from the output of transducer 10 are applied respectively to left and right input terminals 22L and 22R of a crosstalk cancellation circuit 20. The input signals are respectively frequency-demodulated by demodulators 24L and 24R and applied to analog multipliers 26L and 26R respectively. The input signals C_l and C_r are also applied to automatic-gain-controlled amplifiers 28R and 28L, respectively, and thence to squaring circuits 30L and 30R. Lowpass filters 32L and 32R respectively filter out the high-frequency components of the signals, so that through the squaring and filtering actions of the circuits 30 and 32 of both channels, the signal at the output of lowpass filter 32L is the squared envelope of the left signal C_l and the signal at the output of lowpass filter 32R is the squared envelope of the right signal C_r, as given by the following Equations:

e_nvL(t) = 1 + K_R² + 2K_R cos{f(t) − g(t) − θ_R}   (6L)

e_nvR(t) = 1 + K_L² + 2K_L cos{g(t) − f(t) − θ_L}   (6R)

The squared envelope signals e_nvL(t) and e_nvR(t) are then applied to the multipliers 26L and 26R respectively. Since the outputs from the demodulators 24L and 24R are the signals e_L(t) and e_R(t) given respectively by Equations (5L) and (5R), the outputs from the multipliers 26L and 26R represent the numerators of those Equations, which are rewritten as Equations (7L) and (7R).
It is observed from Equations (7L) and (7R) that the first term of each of these Equations is the wanted signal, the second term represents the crosstalk signal, and the third term is the component resulting from the intermodulation of the frequency-modulated left- and right-channel signals. If each of the crosstalk ratios K_L and K_R is of the order of 1/10, the second terms of Equations (7L) and (7R) have a signal level of −40 dB, a value which can be neglected from consideration. Therefore, it is the third terms of these Equations which must be considered for cancellation.
DC blocking capacitors 34L and 34R are provided to block the passage of the DC components of the signals derived from the lowpass filters 32L and 32R, as given by Equations (6L) and (6R), respectively, so that the signals representing the third terms of Equations (6L) and (6R) are passed through the capacitors to attenuators 36L and 36R, where the signal level of these components is reduced to a 50% level. Thus, multipliers 38L and 38R are fed with a signal representing K_R cos{f(t) − g(t) − θ_R} and a signal representing K_L cos{g(t) − f(t) − θ_L}, respectively. An adder 44 is connected between output terminals 42L and 42R, from which the wanted signals f'(t) and g'(t) will be delivered respectively. The output of the adder 44 is coupled to the multipliers 38L and 38R. Therefore, it will be understood that the output of the multiplier 38L equals the third term of Equation (7L) and the output of the multiplier 38R likewise equals the third term of Equation (7R). A subtractor 40L is provided having its negative input connected to the output of multiplier 38L and its positive input connected to the output of multiplier 26L. Since the output from the multiplier 26L is represented by Equation (7L), the unwanted third term of this Equation is cancelled in the subtractor 40L and the wanted signal f'(t) is obtained at the output terminal 42L. In the same manner, a subtractor 40R is provided to cancel the unwanted third term of Equation (7R) with the output from multiplier 38R, generating the wanted signal g'(t) at the output terminal 42R.
An alternative method of eliminating the intermodulation distortion will be described. Since the squared crosstalk ratios K_R² and K_L², being assumed small, can be neglected from consideration, Equations (5L) and (5R) can be rewritten as Equations (9L) and (9R), where

X(t) = K_R cos{f(t) − g(t) − θ_R} and Y(t) = K_L cos{g(t) − f(t) − θ_L}.

The second terms of Equations (9L) and (9R) are the unwanted distortion components, so that these Equations can be further rewritten as follows:

e_L(t) = f'(t) − D_isL(t)   (10L)

e_R(t) = g'(t) − D_isR(t)   (10R)

Likewise, Equations (3L) and (3R) can also be rewritten as follows:

E_nvL(t) ≈ 1 + K_R cos{f(t) − g(t) − θ_R}   (11L)

E_nvR(t) ≈ 1 + K_L cos{g(t) − f(t) − θ_L}   (11R)
FIG. 3 is a schematic diagram of an embodiment which realizes the alternative method of distortion elimination. The input left and right signals C_l and C_r are applied through input terminals 51L and 51R to FM demodulators 52L and 52R, respectively, so that the output signals from the demodulators are the signals e_L(t) and e_R(t) given by Equations (9L) and (9R). The input signals are also applied through AGC circuits 53L and 53R to envelope detectors 54L and 54R, respectively. Each of these envelope detectors includes a diode 55 and a lowpass filter 56 connected in series to generate the negative-sign envelope signals −E_nvL(t) and −E_nvR(t). DC blocking capacitors 57L and 57R are connected to the envelope detectors 54L and 54R to pass the polarity-inverted, high-frequency signal components, represented by the second terms of Equations (10L) and (10R), to waveform converters 58L and 58R, respectively. The waveform converter 58L is designed to exhibit a nonlinear input-output characteristic as shown in FIG. 4 so as to impart a waveform conversion of X/(1 − 2X) to the input signal. Likewise, the waveform converter 58R is designed to have a nonlinear input-output characteristic as shown in FIG. 4 so as to impart a waveform conversion of Y/(1 − 2Y) to the input signal applied thereto. Each of these waveform converters can be realized by a circuit as shown in FIG. 5, including a resistor 59 connected in series between the input and output terminals of the converter and in parallel with a diode 62. Resistors 60 and 61 are connected in parallel with diode 62, with resistor 60 provided at the input side and resistor 61 at the output side of resistor 59.
Across the output terminals 66L and 66R is connected a subtractor 63 which provides the difference signal f'(t) − g'(t) to analog multipliers 64L and 64R. Multiplier 64L multiplies the signal f'(t) − g'(t) with the waveform-converted signal from converter 58L, so that its output represents the distortion component −D_isL(t). This distortion component is applied to a subtractor 67, where it is combined with the output from the demodulator 52L to cancel the distortion component contained in the demodulator output e_L(t) given by Equation (9L). The output of the subtractor 67 is the distortionless signal f'(t), which is applied to the output terminal 66L. Likewise, multiplier 64R multiplies the signal f'(t) − g'(t) with the waveform-converted signal from converter 58R to generate a signal representing the distortion component −D_isR(t), which is applied to an adder 68 to cancel the distortion component of the signal e_R(t) supplied from the demodulator 52R, supplying the distortionless signal g'(t) to the output terminal 66R.
It will be appreciated from the above discussion that the interference distortion present in the frequency-demodulated signal is cancelled partly by signals derived from the envelope of the interference-affected FM signal and partly by means of a closed-loop feedback circuit which is cross-coupled with the adjacent channel's output terminal.
The effect of the automatic gain control circuits described in connection with the previous embodiments is to compensate for the varying sensitivity of the transducer 10 due to aging or replacement thereof with a new one. If the amplitude of the transducer 10 output varied with the transducer's sensitivity, the detected envelope-representative signals would have different amplitudes, which would result in generating inappropriate compensating signals. Each of the automatic gain control circuits provides higher amplification for input signals having a low average amplitude and smaller amplification for input signals having a higher average amplitude, so that transducer 10 operates as if it had a constant sensitivity irrespective of aging or other influencing factors.
1. In a sound reproduction system for quadraphonic records having first and second physically separated sound tracks respectively containing first and second signals frequency-modulated on a same carrier frequency, said system including first and second channels including first and second frequency demodulators for demodulating said first and second frequency-modulated signals respectively and crosstalk paths between said first and second channels to produce an interference distortion in each of said frequency-modulated signals, apparatus for cancelling said interference distortion comprising:
first and second input terminals to which said first and second frequency-modulated signals are respectively applied;
first and second output terminals from which first and second distortionless frequency-demodulated signals are delivered;
first and second means connected to said first and second input terminals respectively for detecting the envelopes of said first and second frequency-modulated signals;
first and second means for eliminating the DC components of the detected envelopes respectively and varying the magnitude of said DC-eliminated envelopes; and
first and second closed-loop feedback circuits cross-coupled between said first and second output terminals including a common arithmetic circuit having its input terminals connected to said
first and second output terminals, said first feedback circuit including a first multiplier for providing multiplication of the output of said common arithmetic circuit and the output from said
first magnitude-varying means and a second arithmetic circuit for combining the output of said first multiplier and the output of said first frequency demodulator and applying its output to said
first output terminal, and said second feedback circuit including a second multiplier for providing multiplication of the output of said common arithmetic circuit and the output from said second
magnitude-varying means and a third arithmetic circuit for combining the output of said second multiplier and the output of said second frequency demodulator and applying its output to said
second output terminal.
2. The apparatus of claim 1, wherein each of said first and second envelope detectors comprises a squaring circuit and a lowpass filter connected in series to generate an output representative of the
squared value of the envelope of a respective one of said first and second frequency-modulated signals.
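The squaring-circuit-plus-lowpass envelope detector recited in claim 2 is easy to check numerically. In the sketch below (sample rate, carrier and modulation frequencies are illustrative, not taken from the patent), squaring followed by a moving average over one carrier period recovers a signal proportional to the squared envelope:

```python
# Squaring-circuit + lowpass envelope detector: for s(t) = A(t) cos(w t),
# s(t)^2 = (A(t)^2 / 2) * (1 + cos(2 w t)); a lowpass filter removes the
# 2w component, leaving a signal proportional to the squared envelope.
import math

fs = 10000.0   # sample rate (illustrative)
fc = 1000.0    # carrier frequency (illustrative)
fm = 20.0      # slow envelope variation
n = 4000

t = [i / fs for i in range(n)]
env = [1.0 + 0.5 * math.sin(2 * math.pi * fm * ti) for ti in t]  # A(t) > 0
sig = [env[i] * math.cos(2 * math.pi * fc * t[i]) for i in range(n)]

# "squaring circuit"
squared = [s * s for s in sig]

# "lowpass filter": moving average over one carrier period (fs / fc samples),
# which exactly nulls the 2*fc component for a constant envelope
L = int(fs / fc)
detected = [sum(squared[i - L + 1:i + 1]) / L for i in range(L - 1, n)]

# the detector output tracks A(t)^2 / 2, i.e. the squared envelope
errs = [abs(detected[i] - env[i + L - 1] ** 2 / 2) for i in range(len(detected))]
max_err = max(errs)
print(max_err)
```

The residual error comes only from the slow variation of the envelope within the averaging window; for the rates chosen here it stays small compared to the detected level.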
3. The apparatus of claim 1, wherein each of said first and second envelope detectors comprises a diode and a lowpass filter connected in a series circuit thereto.
4. The apparatus of claim 1, further comprising a first automatic gain control circuit connected between the first input terminal and said first envelope detector and a second automatic gain control
circuit connected between the second input terminal and said second envelope detector.
5. The apparatus of claim 1, wherein each of said first and second means for eliminating the DC components and varying the magnitude comprises a DC blocking capacitor and a variable resistor
connected in a series circuit thereto.
6. The apparatus of claim 1, wherein said first means for eliminating the DC components and varying the magnitude comprises a DC blocking capacitor and a waveform converter having a characteristic represented by X/(1-2X), wherein X = K_R cos{f(t)-g(t)-θ_R}, where K_R is a crosstalk ratio of said crosstalk path from said second to first channels, f(t) and g(t) respectively represent the first and second audio signals of said first and second channels, and θ_R represents a phase shift of said crosstalk path from said second to first channels; and said second means for eliminating the DC components and varying the magnitude comprises a DC blocking capacitor and a waveform converter having a characteristic represented by Y/(1-2Y), wherein Y = K_L cos{f(t)-g(t)-θ_L}, where K_L is a crosstalk ratio of said crosstalk path from said first to second channels and θ_L represents a phase shift of said crosstalk path from said first to second channels, wherein each of said K_R and K_L is smaller than unity.
7. The apparatus of claim 1, wherein said common arithmetic circuit of said feedback circuits comprises an adder.
8. The apparatus of claim 7, wherein each of said second and third arithmetic circuits comprises a subtractor.
9. The apparatus of claim 1, wherein said common arithmetic circuit of said feedback circuits comprises a subtractor.
10. The apparatus of claim 9, wherein said second arithmetic circuit comprises a subtractor and said third arithmetic circuit comprises an adder.
11. In a sound reproduction system for quadraphonic records having first and second physically separated sound tracks respectively containing first and second signals frequency-modulated on a same
carrier frequency, said system including first and second channels including first and second frequency demodulators for demodulating said first and second frequency-modulated signals respectively
and crosstalk paths between said first and second channels to produce an interference distortion in each of said frequency-modulated signals, apparatus for cancelling said interference distortion comprising:
first and second input terminals to which said first and second frequency-modulated signals are respectively applied;
first and second output terminals from which first and second distortionless frequency-demodulated signals are delivered;
first and second envelope detectors connected to said first and second input terminals respectively, each including a lowpass filter and a squaring circuit to provide an output representative of
the square value of the envelope of each of said frequency-modulated signals;
first and second multipliers providing multiplication of said first frequency-demodulated signal and said square value of the detected envelope of said first frequency-modulated signal, and
providing multiplication of said second frequency-demodulated signal and said square value of the detected envelope of said second frequency-modulated signal;
first and second DC blocking capacitors for eliminating the DC components of the outputs from said first and second envelope detectors respectively;
first and second means for attenuating the signal level of the DC-eliminated signals to 50% of the signal level at the input thereof; and
first and second closed-loop feedback circuits cross-coupled between said first and second output terminals and including an adder having first and second input terminals connected to said first
and second output terminals, a third multiplier in said first feedback circuit for providing multiplication of the output of said adder and the output of said first attenuating means, a fourth
multiplier in said second feedback circuit for providing multiplication of the outputs of said adder and said second attenuating means, a first subtractor in said first feedback circuit for
detecting the difference between the output of said first multiplier and the output of said third multiplier and applying the difference representative output to said first output terminal, and a
second subtractor in said second feedback circuit for detecting the difference between the output of said second multiplier and the output of said fourth multiplier and applying the difference
representative output to said second output terminal.
12. The apparatus of claim 11, further comprising a first automatic gain control circuit connected between said first input terminal and said first envelope detector and a second automatic gain
control circuit connected between said second input terminal and said second envelope detector.
13. In a sound reproduction system for quadraphonic records having first and second physically separated sound tracks respectively containing first and second signals frequency-modulated on a same
carrier frequency, said system including first and second channels including first and second frequency demodulators respectively for demodulating said first and second frequency-modulated signals
and crosstalk paths between said first and second channels to produce an interference distortion in each of said frequency-modulated signals, apparatus for cancelling said interference distortion comprising:
first and second input terminals to which said first and second frequency-modulated signals are respectively applied;
first and second output terminals from which first and second distortionless frequency-demodulated signals are delivered;
first and second lowpass filters connected respectively to said first and second input terminals to detect the envelope of said first and second frequency-modulated signals;
first and second DC blocking capacitors for eliminating the DC components of the outputs from said first and second lowpass filters respectively;
a first waveform converter having a characteristic of X/(1-2X), where X = K_R cos{f(t)-g(t)-θ_R}, wherein K_R is a crosstalk ratio of said crosstalk path from the second to first channels, f(t) and g(t) respectively representing the first and second audio signals of said first and second channels and θ_R representing a phase shift of said crosstalk path from the second to first channels, and a second waveform converter having a characteristic of Y/(1-2Y), where Y = K_L cos{f(t)-g(t)-θ_L}, wherein K_L is a crosstalk ratio of said crosstalk path from said first to second channels, and θ_L representing a phase shift of said crosstalk path from said first to second channels, wherein each of said K_R and K_L is smaller than unity; and
first and second closed-loop feedback circuits cross-coupled between said first and second output terminals including a common subtractor having two input terminals connected respectively to said
first and second output terminals, said first feedback circuit including a first multiplier for providing multiplication of the output of said subtractor and the output of said first waveform
converter and a second subtractor for detecting the difference in magnitude between the output of said first demodulator and the output of said first multiplier and applying the difference
representative output to said first output terminal, said second feedback circuit including a second multiplier for providing multiplication of the output of said common subtractor and the output
of said second waveform converter, and an adder for providing summation of the output of said second multiplier and the output from said second frequency demodulator and applying the summation
output to said second output terminal.
14. The apparatus of claim 13, further comprising a first automatic gain control circuit connected between said first input terminal and said first lowpass filter and a second automatic gain control
circuit connected between said second input terminal and said second lowpass filter.
Referenced Cited
U.S. Patent Documents
3936619 February 3, 1976 Sugimoto et al.
3943303 March 9, 1976 Masuda et al.
3985978 October 12, 1976 Cooper
3989903 November 2, 1976 Cooper et al.
Current U.S. Class: 179/1GB; 179/1004ST
International Classification: H04H 500;
Sympathetic Vibratory Physics | Keplers Second Law
"The focal radius joining a planet to the sun sweeps out equal areas in equal times."
II. The line joining the planet to the Sun sweeps out equal areas in equal times as the planet travels around the ellipse.
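The law can be verified numerically by integrating a Kepler orbit and summing the small triangles swept per time step; the orbit parameters below are illustrative (units with GM = 1):

```python
# Numerical check of Kepler's second law: integrate a two-body orbit and
# verify that the focal radius sweeps (nearly) equal areas in equal times.
import math

GM = 1.0

def accel(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

def step(state, dt):
    # one RK4 step of the planar Kepler problem
    def deriv(s):
        x, y, vx, vy = s
        ax, ay = accel(x, y)
        return (vx, vy, ax, ay)
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# start at perihelion of an e = 0.5 ellipse with semi-major axis 1
e = 0.5
state = [1.0 - e, 0.0, 0.0, math.sqrt(GM * (1 + e) / (1 - e))]

dt = 1e-4
steps_per_window = 5000          # equal time windows of length 0.5
areas = []
for _ in range(4):               # four consecutive windows
    area = 0.0
    for _ in range(steps_per_window):
        x0, y0 = state[0], state[1]
        state = step(state, dt)
        # area of the small triangle swept during this step
        area += 0.5 * abs(x0 * state[1] - y0 * state[0])
    areas.append(area)

spread = max(areas) - min(areas)
print(areas, spread)
```

The four windows sample very different parts of the ellipse (near perihelion the planet is fast, far from it slow), yet the swept areas agree to high accuracy.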
The JOUSBoost package implements under/oversampling with jittering for probability estimation. It is intended to improve probability estimates produced by boosting algorithms (such as AdaBoost), but is modular enough to be used with virtually any classification algorithm from machine learning. See Mease (2007) for more information.
You can install:
• the latest released version from CRAN with install.packages("JOUSBoost")
• the latest development version from github with
The following example gives a usage case for JOUSBoost. This example illustrates the improvement in probability estimates one gets from applying the JOUS procedure to AdaBoost on a simulated data set. First, we'll train AdaBoost applied to depth-three decision trees, and then we'll get the estimated probabilities.
# Load the package and generate data from the Friedman model #
library(JOUSBoost)
dat = friedman_data(n = 1000, gamma = 0.5)
train_index = sample(1:1000, 800)
# Train AdaBoost classifier using depth 3 decision tree
ada = adaboost(dat$X[train_index,], dat$y[train_index], tree_depth = 3, n_rounds = 400)
# get probability estimate on test data
phat_ada = predict(ada, dat$X[-train_index, ], type="prob")
Next, we’ll compute probabilities by using the JOUS procedure.
# Apply jous to adaboost classifier
class_func = function(X, y) adaboost(X, y, tree_depth = 3, n_rounds = 400)
pred_func = function(fit_obj, X_test) predict(fit_obj, X_test)
jous_fit = jous(dat$X[train_index,], dat$y[train_index], class_func,
pred_func, type="under", delta=10, keep_models=TRUE)
# get probability estimate on test data
phat_jous = predict(jous_fit, dat$X[-train_index, ], type="prob")
Finally, we can see the benefit of using JOUSBoost!
# compare MSE of probability estimates
p_true = dat$p[-train_index]
mean((p_true - phat_jous)^2)
#> [1] 0.05455999
mean((p_true - phat_ada)^2)
#> [1] 0.1277416
Mease, D., Wyner, A., and Buja, A. 2007. "Cost-weighted Boosting with Jittering and Over/Under-Sampling: JOUS-Boost." Journal of Machine Learning Research 8: 409-439.
Label function automaticaly, 2d plot
def random_between(j, k):
    a = j + (k - j) * random()
    return a
t = var('t')
p1=plot(y1, (t,-5,5), gridlines=True,color='red')
p2=plot(y2, (t,-5,5), gridlines=True,color='green')
p3=plot(y3, (t,-5,5), gridlines=True,color='orange')
p4=plot(y4, (t,-5,5), gridlines=True,color='pink')
Well, I would like to view the full grid and auto-label the functions. Is this possible? I have three books here; do I have to look somewhere else?
1 Answer
The secret sauce is that you can ask legend_label to use the LaTeX representation of the function like so:
p1=plot(y1, (t,-5,5), gridlines=True,color='red', legend_label='$'+latex(y1)+'$')
See this live example.
Paper IPM / P / 17205
School of Physics
Title: The integrated Sachs-Wolfe effect in interacting dark matter-dark energy models
Author(s): 1. M. Ghodsi Yengejeh
2. S. Fakhry
3. J. T. Firouzjaee
4. H. Fathi
Status: Published
Journal: Phys. Dark Univ.
Vol.: 39
Year: 2023
Pages: 101144
Supported by: IPM
Interacting dark matter-dark energy (IDMDE) models can be taken into account as one of the present challenges that may affect cosmic structures. In this work, we study the integrated Sachs-Wolfe (ISW) effect in IDMDE models. To this end, we initially introduce a theoretical framework for IDMDE models. Moreover, we briefly discuss the stability conditions of IDMDE models and, by specifying a simple functional form for the energy density transfer rate, we calculate the perturbation equations. In the following, we calculate the amplitude of the matter power spectrum for the IDMDE model and compare it with the corresponding result obtained from the ΛCDM model. Furthermore, we calculate the amplitude of the ISW auto-power spectrum as a function of multipole order l for the IDMDE model. The results indicate that the amplitude of the ISW auto-power spectrum in the IDMDE model for different phantom dark energy equations of state behaves similarly to the one for the ΛCDM model, whereas, for the quintessence dark energy equations of state, the amplitude of the ISW auto-power spectrum for the IDMDE model should be higher than the one for the ΛCDM model. Also, the corresponding results for different values of the coupling parameter demonstrate that ξ is inversely proportional to the amplitude of the ISW auto-power spectrum in the IDMDE model. Finally, by employing four different surveys, we calculate the amplitude of the ISW cross-power spectrum as a function of multipole order l for the IDMDE model. The results show that the amplitude of the ISW cross-power spectrum for the IDMDE model for all values of ω_x is higher than the one obtained for the ΛCDM model. Also, it turns out that the amplitude of the ISW cross-power spectrum in the IDMDE model changes inversely with the value of the coupling parameter ξ.
Three Body Problem
From Scholarpedia
Alain Chenciner (2007), Scholarpedia, 2(10):2111. doi:10.4249/scholarpedia.2111 revision #152224 [link to/cite this article]
The problem is to determine the possible motions of three point masses \(m_1\ ,\) \(m_2\ ,\) and \(m_3\ ,\) which attract each other according to Newton's law of inverse squares. It started with the
perturbative studies of Newton himself on the inequalities of the lunar motion[1]. In the 1740s it was constituted as the search for solutions (or at least approximate solutions) of a system of
ordinary differential equations by the works of Euler, Clairaut and d'Alembert (with in particular the explanation by Clairaut of the motion of the lunar apogee). Much developed by Lagrange, Laplace
and their followers, the mathematical theory entered a new era at the end of the 19th century with the works of Poincaré and since the 1950s with the development of computers. While the two-body
problem is integrable and its solutions completely understood (see [2],[AKN],[Al],[BP]), solutions of the three-body problem may be of arbitrary complexity and are very far from being completely understood.
The following form of the equations of motion, using a force function \(U\) (opposite of potential energy), goes back to Lagrange, who initiated the general study of the problem: if \(\vec r_i\) is
the position of body \(i\) in the Euclidean space \(E\equiv\R^p\) (scalar product \(\langle,\rangle\ ,\) norm \(||.||\)), \[m_i{d^2\vec r_i\over dt^2}=\sum_{j\not=i}{m_im_j}\frac{\vec r_j-\vec r_i}{|
|\vec r_j-\vec r_i||^3}=\frac{\partial U}{\partial\vec r_i},\; i=1,2,3,\;\hbox{where}\; U=\sum_{i<j}\frac{m_im_j}{||\vec r_j-\vec r_i||}\cdot\] Endowing the configuration space \(\hat{\mathcal X}=\{x
=(\vec r_1,\vec r_2,\vec r_3)\in E^3,\; \vec r_i\not=\vec r_j\;\hbox{if}\; i\not=j\}\) (or rather its closure \({\mathcal X}\)) with the mass scalar product \[x'\cdot x''=\sum_{i=1}^3{m_i\langle\vec
r'_i,\vec r''_i\rangle}\] we can write them \[{d^2 x\over dt^2}=\nabla U(x),\] where the gradient is taken with respect to this scalar product. In the phase space \(T^*\hat{\mathcal X}\equiv\hat{\
mathcal X}\times{\mathcal X}\ ,\) that is the set of pairs \((x,y)\) representing the positions and velocities (or momenta) of the three bodies, the equations take the Hamiltonian form (where \(|y|^2
=y\cdot y\)): \[{dx\over dt}={\partial H\over\partial y},\quad {dy\over dt}=-{\partial H\over \partial x}, \quad\hbox{where}\quad H(x,y)={1\over 2}|y|^2-U(x).\]
Symmetries, first integrals
The equations are invariant under time translations, Galilean boosts and space isometries. This implies the conservation of
• the total energy \(H\ ,\)
• the linear momentum \(P=\sum_{i=1}^3m_i\frac{d\vec r_i}{dt}\) (by an appropriate choice of a Galilean frame one can suppose that \(P=0\) and that the center of mass is at the origin),
• the angular momentum bivector \(C=\sum_{i=1}^3{m_i\vec r_i\wedge\frac{d\vec r_i}{dt}}\) (identified with a real number if \(p=2\) and with a vector if \(p=3\)).
If the motion takes place on a fixed line, \(C=0\ ;\) on the other hand, if \(C=0\ ,\) the motion takes place in a fixed plane (Dziobek).
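These conservation laws can be monitored along a numerically integrated orbit; the sketch below (unit gravitational constant, arbitrary masses and initial data) tracks \(H\ ,\) \(P\) and the scalar angular momentum \(C\) of a planar solution:

```python
# Minimal planar three-body integrator (G = 1) checking that the energy H,
# linear momentum P and scalar angular momentum C stay constant.
import math

m = [1.0, 2.0, 3.0]

def accels(pos):
    a = [[0.0, 0.0] for _ in range(3)]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            a[i][0] += m[j] * dx / r3
            a[i][1] += m[j] * dy / r3
    return a

def integrals(pos, vel):
    K = 0.5 * sum(m[i] * (vel[i][0] ** 2 + vel[i][1] ** 2) for i in range(3))
    U = sum(m[i] * m[j] / math.dist(pos[i], pos[j])
            for i in range(3) for j in range(i + 1, 3))
    P = [sum(m[i] * vel[i][k] for i in range(3)) for k in (0, 1)]
    C = sum(m[i] * (pos[i][0] * vel[i][1] - pos[i][1] * vel[i][0])
            for i in range(3))
    return K - U, P, C

pos = [[1.0, 0.0], [-1.0, 0.5], [0.0, -1.0]]
vel = [[0.0, 0.3], [0.2, 0.0], [-0.1, -0.2]]
H0, P0, C0 = integrals(pos, vel)

# leapfrog (kick-drift-kick), which respects the conserved quantities well
dt = 1e-3
for _ in range(500):             # integrate up to t = 0.5
    a = accels(pos)
    for i in range(3):
        vel[i][0] += 0.5 * dt * a[i][0]
        vel[i][1] += 0.5 * dt * a[i][1]
        pos[i][0] += dt * vel[i][0]
        pos[i][1] += dt * vel[i][1]
    a = accels(pos)
    for i in range(3):
        vel[i][0] += 0.5 * dt * a[i][0]
        vel[i][1] += 0.5 * dt * a[i][1]

H1, P1, C1 = integrals(pos, vel)
print(H0, H1, C0, C1)
```

The leapfrog kicks act along the lines joining pairs of bodies, so \(P\) and \(C\) are conserved up to roundoff, while \(H\) oscillates within a small bounded error.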
The reduction of symmetries was first accomplished by Lagrange in his great 1772 Essai sur le problème des trois corps, where the evolution of mutual distances in the spatial problem is seen to be
ruled by a system of order 7.
Finally, the homogeneity of the potential implies a scaling invariance:
• if \(x(t)\) is a solution, so is \(x_\lambda(t)=\lambda^{-\frac{2}{3}}x(\lambda t)\) for any \(\lambda>0\ .\) Moreover \(H(x_\lambda(t))=\lambda^{\frac{2}{3}}H(x(t))\) and
\[C(x_\lambda(t))=\lambda^{-\frac{1}{3}}C(x(t))\ ;\] it follows that \(\sqrt{|H|}C\) is invariant under scaling:
\[\sqrt{|H|}C(x_\lambda(t))=\sqrt{|H|}C(x(t))\ .\]
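The scaling symmetry is easy to test numerically. Since it only uses the degree \(-1\) homogeneity of the potential, the sketch below illustrates it on the Kepler problem (the three-body potential \(U\) scales identically); initial data and \(\lambda\) are arbitrary:

```python
# Check of the scaling symmetry x_lam(t) = lam**(-2/3) * x(lam * t) for a
# degree -1 homogeneous potential, illustrated on the planar Kepler problem.

def step(s, dt):  # one RK4 step of x'' = -x/|x|^3 in the plane
    def f(s):
        x, y, vx, vy = s
        r3 = (x * x + y * y) ** 1.5
        return (vx, vy, -x / r3, -y / r3)
    k1 = f(s)
    k2 = f([a + 0.5 * dt * b for a, b in zip(s, k1)])
    k3 = f([a + 0.5 * dt * b for a, b in zip(s, k2)])
    k4 = f([a + dt * b for a, b in zip(s, k3)])
    return [a + dt / 6 * (p + 2 * q + 2 * r + w)
            for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

def integrate(s, t_end, n):
    dt = t_end / n
    for _ in range(n):
        s = step(s, dt)
    return s

lam = 2.0
s0 = [1.0, 0.0, 0.1, 1.1]                   # some eccentric initial data
# rescaled data: positions by lam^(-2/3), velocities by lam^(1/3)
s0_lam = [lam ** (-2 / 3) * s0[0], lam ** (-2 / 3) * s0[1],
          lam ** (1 / 3) * s0[2], lam ** (1 / 3) * s0[3]]

T = 1.0
orig = integrate(list(s0), lam * T, 4000)    # x(lam * T)
resc = integrate(list(s0_lam), T, 4000)      # x_lam(T)

err = max(abs(resc[0] - lam ** (-2 / 3) * orig[0]),
          abs(resc[1] - lam ** (-2 / 3) * orig[1]))
print(err)
```

The two independently integrated trajectories agree after rescaling, as the symmetry predicts.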
Homographic solutions
A configuration \(x=(\vec r_1,\vec r_2,\vec r _3)\) is called a central configuration if it collapses homothetically on its center of mass \(\vec r_G\ ,\) defined by \(\vec r_G={1\over M}\sum{m_i\vec r_i}\) (here \(M=\sum m_i\)), when released without initial velocities (such a motion is called homothetic).
This means that there exists \(\lambda <0\) such that \(\sum_jm_j\frac{\vec r_j-\vec r_i}{||\vec r_j-\vec r_i||^3}=\lambda(\vec r_i-\vec r_G)\) for \(i=1,2,3\ ,\) which is equivalent to \(\sum_jm_j\
left(\frac{1}{r_{ij}^3}+\frac{\lambda}{M}\right)(\vec r_j-\vec r_i)=0\ ,\) where \(r_{ij}=||\vec r_j-\vec r_i||\ .\) For non collinear configurations, the two vectors \(\vec r_j-\vec r_i,\, j\ne i\)
are linearly independent and so the coefficients in the last sum must vanish. It follows that \(x^0\) must be equilateral, a result first proved by Lagrange in his 1772 memoir. Collinear central
configurations of three bodies were already known to Euler in 1763: the ratio of the distances of the midpoint to the extremes is the unique real solution of an equation of the fifth degree whose
coefficients depend on the masses.
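Both families of central configurations can be recovered numerically from the defining condition \(a_i=\lambda(\vec r_i-\vec r_G)\ :\) for an equilateral triangle of unit side a short computation gives \(\lambda=-M\) exactly, for any masses, and in the collinear case a one-dimensional bisection on the distance ratio locates Euler's unique solution (the masses and bracketing interval below are illustrative):

```python
# Two numeric checks on central configurations (G = 1, arbitrary masses):
# (i) any equilateral triangle is central (Lagrange);
# (ii) for bodies on a line at 0, 1, 1 + x there is exactly one ratio x > 0
#      making the configuration central (Euler), found here by bisection.
import math

m = [1.0, 2.0, 3.0]
M = sum(m)

def accel(pos):  # pos: list of (x, y); gravitational accelerations
    out = []
    for i in range(3):
        ax = ay = 0.0
        for j in range(3):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += m[j] * dx / r3
            ay += m[j] * dy / r3
        out.append((ax, ay))
    return out

# (i) equilateral triangle with unit side
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
gx = sum(m[i] * tri[i][0] for i in range(3)) / M
gy = sum(m[i] * tri[i][1] for i in range(3)) / M
acc = accel(tri)
# for a central configuration a_i = lam * (r_i - r_G); here lam = -M exactly
lag_err = max(abs(acc[i][0] + M * (tri[i][0] - gx)) +
              abs(acc[i][1] + M * (tri[i][1] - gy)) for i in range(3))

# (ii) collinear case: lam computed from bodies 1,2 must match bodies 1,3
def mismatch(x):
    pos = [(0.0, 0.0), (1.0, 0.0), (1.0 + x, 0.0)]
    a = accel(pos)
    lam12 = (a[0][0] - a[1][0]) / (0.0 - 1.0)
    lam13 = (a[0][0] - a[2][0]) / (0.0 - (1.0 + x))
    return lam12 - lam13

lo, hi = 1e-3, 10.0
for _ in range(200):             # bisection; mismatch changes sign once
    mid = 0.5 * (lo + hi)
    if mismatch(lo) * mismatch(mid) <= 0:
        hi = mid
    else:
        lo = mid
x_euler = 0.5 * (lo + hi)
print(lag_err, x_euler)
```

Solving the centrality condition directly, as above, avoids writing down the quintic explicitly while exhibiting the same unique positive root.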
Central configurations are also the ones with homographic motions, along which the configuration changes only by similarities and each body has a Keplerian motion (elliptic, parabolic or hyperbolic
depending on the sign of the energy), with the same eccentricity \(e\ .\) In the elliptic case, represented in the animations, \(e\in[0,1]\)
• when \(e=1\ ,\) the motion is homothetic;
• when \(e=0\ ,\) the motion is a relative equilibrium: the configuration changes only by isometries. For example, the Moon should be placed between the Earth and the Sun at approximately four
times its actual distance from the Earth in order to be in (unstable) collinear equilibrium, while the Greeks and Trojans are most numerous near the two (stable) positions where they would form
with the Sun and Jupiter an equilateral triangle.
R. Moeckel's handwritten Trieste notes are a very good reference on central configurations and the stability of three body relative equilibria. In 2005, R. Moeckel proved that the Saari conjecture is
true for 3 bodies in \(\R^d,d\ge 2\ :\) the relative equilibria are the only motions whose moment of inertia with respect to the center of mass (that is \(I=|x(t)-x_G|\)) is constant.
Central configurations play a key part in the analysis of
• the topology of the invariant manifolds obtained by fixing the values of the first integrals (Birkhoff, Smale, Albouy, McCord-Meyer-Wang);
• the analysis of total collisions (\(I=|x-x_G|^2\to 0\) as \(t\to t_0\)) and completely parabolic motions (\(K=|y-\frac{dx_G}{dt}|^2\to 0\) as \(t\to\infty\)), where the renormalized configuration
defined by the bodies entering the collision must tend to the set of central configurations (Sundman, McGehee, Saari).
The astronomer's three-body problem: i) the planetary problem
This is the case where one mass is much larger than all the other ones and the solutions one considers are close to circular and coplanar Keplerian motions. The typical problem is the motion around
the Sun (mass \(m_0\)) of the two big planets, Jupiter and Saturn (masses \(\mu m_1,\mu m_2\) of the order of \(m_0/1000\)) which contain most of the mass of the planets in the solar system.
An equation-free description of the principal features of the planetary and the lunar problems was given in [Ai][3] by Sir G.B. Airy[4]. Browsing through this book may help some readers take a friendlier view of the equations.
Reduction to the general problem of dynamics
When written in Poincaré's heliocentric coordinates [P2], \[X_0=x_0,Y_0=y_0+\mu y_1+\mu y_2,\quad X_j=x_j-x_0,\quad Y_j=y_j,\quad j=1,2,\] where the \(x_j=\vec r_j,\, j=0,1,2,\) are the positions and
where \(y_0=m_0\,d \vec r_0 /dt,\, \mu y_j=\mu m_j\, d\vec r_j /dt,\, j=1,2,\) are the linear momenta, the equations take the form of a perturbation of a pair of uncoupled Kepler problems in \(\R^3\
.\) More precisely, one reduces the translation symmetry by restricting to the value \(Y_0=0\) the total linear momentum and quotienting by translations. After dividing the new Hamiltonian and
symplectic form by \(\mu\) one obtains the following Hamiltonian, defined on \(T^*\R^{6}\equiv \R^{12}\) (\(=(\R^3)^4\ ,\) coordinates \((X_1,X_2,Y_1,Y_2)\)) deprived of the collision set (\(X_j=0\)
or \(X_1=X_2\)) with its canonical symplectic structure: \[H=\sum_{1\le j\le 2}\left(\frac{||Y_j||^2}{2\bar m_j}-\frac{\bar m_jM_j} {||X_j||}\right)+\mu\left(-\frac{m_1m_2}{||X_1-X_2||}+ \frac{Y_1\
cdot Y_2}{m_0}\right)=H_0+\mu R.\] (\(M_j=m_0+\mu m_j\) and \(\bar m_jM_j=m_0m_j\)). It describes an \(O(\mu)\) perturbation of two uncoupled Kepler problems. The solutions of interest to astronomers
are those which stay close to the solutions of \(H_0\) in which the planets describe around the sun circular coplanar motions with the same orientation. When transformed into action-angle coordinates
\((I,\theta)\) for the Kepler problems, the Hamiltonian takes the form \[H(I,\theta)=H_0(I)+\mu R(I,\theta,\mu)\] of what Poincaré called Le problème général de la dynamique. Due to the Kepler
degeneracy, this is a degenerate form of this problem in that \(H_0\) does not depend on all the actions: it depends only on the fast actions \(L_1,L_2\) proportional to the semi major axes, but not
on the slow ones, associated to the eccentricities and inclinations of the Keplerian ellipses.
The secular system
The theory of normal forms (or averaging) allows to describe in second approximation the effect of the perturbation by the so-called averaged (or secular) system. The Hamiltonian of the averaged
system is the function, defined on the space of pairs of non intersecting oriented ellipses \((E_1,E_2)\) of fixed semi major axes, by the average \(\int_{T^2}R\,d\ell_1\,d\ell_2=-\int_{T^2}\frac
{m_1m_2}{||X_1-X_2||}\,d\ell_1\,d\ell_2\) of the inverse distance function of a pair of points \(X_1\in E_1,X_2\in E_2\ .\) Here \(\ell_i\) is the fast angle variable associated to the action \(L_i\
,\) i.e. the mean anomaly of \(X_i\ ,\) proportional to the area swept on \(E_i\) by the ray from the focus to \(X_i\) and hence to the Keplerian time on \(E_i\ .\) Pairs of coplanar circles are
singularities of this secular system and the study by Laplace of the corresponding linearized system gave the first result of (linear) stability of a planetary system (see [Las]), well before the
establishment of the spectral theory of matrices: in this approximation, added to the fast motion of the planets on their respective ellipses (described by the \(\ell_i\)) there is a slow (secular)
precession of the perihelia and the nodes of these ellipses (the slow angles) associated to small oscillations of their eccentricities and inclinations. As the averaged Hamiltonian does not depend
any more on the \(\ell_i\ ,\) the semi major axes of the ellipses do not vary in this approximation (this is Laplace's first stability theorem).
From Lindstedt series to K.A.M.
To go beyond, it is necessary to analyze the fate of the quasi-periodic motions just described under the remaining perturbation. Such perturbed quasi-periodic motions are given formally by the theory
of Lindstedt series, whose existence was proved by Poincaré in the second volume of his epoch-making treatise Les méthodes nouvelles de la mécanique céleste. These series exist only when the
unperturbed Keplerian frequencies are not in resonance (i.e. when their ratio is not a rational number) and they are generally divergent (Poincaré, Méthodes Nouvelles, chapter XIII). The breakthrough
was made by Arnold (1961) who, developing a degenerate version of Kolmogorov's celebrated theorem (1954, the first letter of the K.A.M. acronym, which stands for Kolmogorov-Arnold-Moser), proved the
existence of a set of positive measure of almost planar and almost circular quasi-periodic solutions when the ratio of the masses of the planets to the mass of the Sun is microscopic. Arnold's proof
was complete only for the planar problem. In the spatial case, a new resonance is present: the trace of the linearized secular system is always zero. This fact, which generalizes the opposite motions
of the perigee and the node in the secular system of the lunar problem is true in general for the spatial \(n\)-body problem. This was first noticed by Herman who gave a new proof of Arnold's
theorem, valid in the spatial case for any number of planets. After the death of Herman, this proof was completely written down by Féjoz [F1]. Herman's resonance disappears when one reduces the
rotational symmetry, and in fact P. Robutel had been able to complete Arnold's proof in the spatial three-body case thanks to the use of a computer for checking the non-degeneracy conditions.
Finally, the possibility of writing down long normal forms with the help of computers allows finding more realistic bounds for the masses to which KAM theory applies. Examples of this can be found in
the works of L. Chierchia and A. Celletti on the Sun-Jupiter-Victoria system and those of A. Giorgilli and U. Locatelli on the Sun-Jupiter-Saturn system (KAM Theory in Celestial Mechanics).
The astronomer's three-body problem: ii) a caricature of the lunar problem
The motion of the Moon around the Earth can be considered in first approximation as a Keplerian motion perturbed by the action of the distant Sun. The perturbation here is more important than in the
planetary case and the problem was the object of major works since Newton himself and, over half a century later, Clairaut, d'Alembert and Euler. More and more refined theories were given in
particular by Laplace, Pontécoulant, Hansen, Delaunay, Hill, Adams, Brown,... describing more and more inequalities of the motion. A very nice history of the problem is given in [G] by M. Gutzwiller.
Incidentally, a global study of the secular system reveals how the planetary and lunar problems connect to each other and often have similar properties (see [F2]). Motivated by the work of Hill,
where in first approximation the mass of the Moon is supposed to be zero, Poincaré, followed by Birkhoff, developed the so-called restricted problem, where the motion of the two massive planets,
being unperturbed by the zero mass body, is Keplerian.
The planar circular restricted problem:
In the case when this motion is circular and the zero mass body lies in the same plane, the problem can be studied in a rotating frame which fixes the two massive bodies. In this frame, it is still
defined by an autonomous Hamiltonian with two degrees of freedom, the Jacobi constant. The projection on the configuration plane of the constant energy surfaces define regions of possible motions,
called the Hill regions.
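A minimal numerical sketch of the planar circular restricted problem in the rotating frame (mass parameter \(\mu\ ,\) primaries fixed at \((-\mu,0)\) and \((1-\mu,0)\ ,\) illustrative initial data) checks that the Jacobi constant is conserved along an integrated orbit; the region where its "zero-velocity" part exceeds the orbit's value is exactly the Hill region:

```python
# Planar circular restricted three-body problem in the rotating frame:
# integrate a trajectory and check that the Jacobi constant
#   C_J = x^2 + y^2 + 2(1-mu)/r1 + 2 mu/r2 - (vx^2 + vy^2)
# is conserved along the motion.
mu = 0.01

def rhs(s):
    x, y, vx, vy = s
    r1 = ((x + mu) ** 2 + y ** 2) ** 0.5
    r2 = ((x - 1 + mu) ** 2 + y ** 2) ** 0.5
    ax = 2 * vy + x - (1 - mu) * (x + mu) / r1 ** 3 - mu * (x - 1 + mu) / r2 ** 3
    ay = -2 * vx + y - (1 - mu) * y / r1 ** 3 - mu * y / r2 ** 3
    return (vx, vy, ax, ay)

def jacobi(s):
    x, y, vx, vy = s
    r1 = ((x + mu) ** 2 + y ** 2) ** 0.5
    r2 = ((x - 1 + mu) ** 2 + y ** 2) ** 0.5
    return x * x + y * y + 2 * (1 - mu) / r1 + 2 * mu / r2 - vx * vx - vy * vy

def rk4(s, dt):
    k1 = rhs(s)
    k2 = rhs([a + 0.5 * dt * b for a, b in zip(s, k1)])
    k3 = rhs([a + 0.5 * dt * b for a, b in zip(s, k2)])
    k4 = rhs([a + dt * b for a, b in zip(s, k3)])
    return [a + dt / 6 * (p + 2 * q + 2 * r + w)
            for a, p, q, r, w in zip(s, k1, k2, k3, k4)]

# start the massless body on a small near-circular orbit of the big primary
s = [0.2, 0.0, 0.0, 1.8]
C0 = jacobi(s)
dt = 1e-4
for _ in range(20000):           # integrate up to t = 2 in the rotating frame
    s = rk4(s, dt)
C1 = jacobi(s)
print(C0, C1)
```

For this large value of the Jacobi constant the component of the Hill region containing the initial point is a small disc around the big primary, which is the setting of Hill's stability argument below.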
The simplest case:
It occurs when, the Jacobi constant being negative and big enough, the zero mass body (we shall still call it the Moon) moves in a component of the Hill region which is a disc around one of the
massive bodies (the Earth). This fact already implies Hill's rigorous stability result: for all times such a Moon would not be able to escape from this disc. Nevertheless this does not prevent
collisions with the Earth. After fixing the Jacobi constant and regularizing these collisions as elastic bounces accompanied by a slowing down of the motion to keep finite velocity, the equations
take place in a space diffeomorphic to the real projective space \({\R}P(3)\ ,\) obtained by adding to the energy hypersurface (diffeomorphic to an open solid torus \(S^1\times {\R}^2\)) a circle \(S
^1\) at infinity, corresponding to the possible directions of collision.
Poincaré's first return map
If the Jacobi constant is negative and large enough, everything is reduced to the study of the Poincaré first return map \(\mathcal P\) in a two dimensional annulus of section \(\mathcal A\ ,\)
diffeomorphic to \(S^1\times[-1,1]\) (see [Co]). The boundaries of \(\mathcal A\) are the so-called Hill's orbits, almost circular in the rotating frame and all the other orbits cut \(\mathcal A\)
transversely. The annulus is essentially the set of possible positions of the perigees of the solutions in the rotating frame with the given Jacobi constant, the return map sending one position of
the perigee to the next one. For the Kepler problem in an inertial frame, this return map reduces to the identity. For the Kepler problem in a rotating frame, it is an integrable conservative twist
map, which can be described as a family of rotations by an angle which depends in a monotone way on the radius of the circle. For the problem we are studying, it is a non integrable conservative
twist map.
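A hedged toy model of the integrable case just described — each invariant circle is rotated by an angle depending monotonically on its radius (the function `omega` below is an arbitrary illustrative choice, not derived from the Kepler problem):

```python
import math

def twist_step(theta, r, omega=lambda r: 1.0 + r):
    """One step of an integrable conservative twist map: the radius r is
    preserved, and the circle r = const is rotated by the angle omega(r),
    which varies monotonically with r (the 'twist' condition)."""
    return (theta + omega(r)) % (2 * math.pi), r
```

For the problem under study the return map is a non-integrable conservative perturbation of such a model, so the radius is no longer exactly conserved.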
Birkhoff's fixed point theorem, Moser's invariant curve theorem and Aubry-Mather theory prove respectively the existence of periodic motions of long period of the Moon around the Earth in the
rotating frame, of quasi-periodic motions whose perigees have as envelope a smooth closed curve and of motions whose perigees have as envelope a Cantor set (closed curve with infinitely many holes).
It is also possible to prove the existence of "stuttering orbits" as in figure 4, where the sign of the angular momentum changes from time to time and the solution comes arbitrarily close to
Higher values of the Jacobi constant
When the Jacobi constant increases, the components of the Hill regions around the two non zero masses merge, a case closer to the one of the true Moon; this allows transit orbits which link
neighborhoods of the two massive bodies (see [Sz]). In the animation of figure 6, the masses of S and E are respectively 0.9 and 0.1.
Periodic solutions
At the end of paragraph 36 of the first volume of the Méthodes nouvelles one reads Poincaré's famous sentence about periodic (or relatively periodic) solutions: D'ailleurs, ce qui nous rend ces solutions périodiques si précieuses, c'est qu'elles sont, pour ainsi dire, la seule brèche par où nous puissions essayer de pénétrer dans une place jusqu'ici réputée inabordable. (Moreover, what makes these periodic solutions so precious is that they are, so to speak, the only breach through which we may try to penetrate a place hitherto deemed unapproachable.) It is still unknown today if periodic solutions of the three-body problem are dense in the bounded motions, but their importance is unquestioned.
Poincaré's classification
In the planetary problem, Poincaré used various techniques, in particular continuation, to prove the existence of various families of periodic (or periodic modulo rotations) orbits. He defined sorts,
genres, species. In the first sort, the eccentricities of the planets are small and they have no inclination; in the limit where the masses vanish, the orbits become circular, with rationally
dependent frequencies. In the second sort, the inclinations are still zero but the eccentricities are finite; in the limit one gets elliptic motions with the same direction of major semi-axes and
conjunctions or oppositions at each half-period. In the third sort, eccentricities are small but inclinations are finite and the limit motions are circular but inclined. Solutions of the second genre
are called today subharmonics: they are associated to a given \(T\)-periodic solution of one of the 3 sorts and their period is an integer multiple of \(T\ .\) Solutions of the second species are
particularly interesting: in the limit of zero masses, the planets follow Keplerian orbits till they have a close encounter (i.e., in the limit, a collision) and then shift to another pair of
Keplerian ellipses. A full symbolic dynamics of such almost collision orbits has been constructed by Bolotin: it implies the existence of solutions with an erratic diffusion of the angular momentum
and a much slower one of the Jacobi constant.
Numerical exploration
In the twentieth century, extensive search for families of periodic solutions in the restricted 3-body problem was accomplished, first by mechanical quadratures at the Copenhagen Observatory
(Stromgren), later using computers by Hénon at the Nice Observatory, Broucke, and others. The books by Hénon [He] and Bruno [Br] describe both theoretically and numerically the so-called generating
families of periodic solutions of the planar circular restricted 3-body problem (i.e. the limits of the families when the mass of one of the massive bodies tends to zero). An extensive study of the
phase space of the related Hill's problem was completed by Simo and Stuchi. Particularly interesting for mission design are the Halo orbits in the spatial restricted problem, which bifurcate from a
planar Lyapunov family originating from a collinear relative equilibrium. Other much studied special cases are the collinear problem with the remarkable periodic (regularized) solution discovered by
Schubart and the isosceles problem, where one body moves on a line, while the two others, with the same mass, move symmetrically on the orthogonal line (resp. plane in the spatial case).
Stability, exponents, invariant manifolds
Poincaré initiated also the study of the stability of periodic orbits, introducing their characteristic exponents and their stable and unstable manifolds. The famous mistake in his 1889 prize memoir,
where he had thought he had proved stability in the restricted problem, is about the intersection of these manifolds (see [BG]). Let us recall that the collinear relative equilibria are always
linearly unstable; for the equilateral ones, linear stability occurs only when one mass greatly dominates the two others (Routh criterion, see Trieste notes) as in the case of Sun-Jupiter-Trojans
already alluded to. The study of the intersections of stable and unstable manifolds of known periodic solutions leads to the construction of orbits by methods of symbolic dynamics.
Minimizing the action
Solutions of the equations of motion \(\ddot x=\nabla U(x)\) are critical paths of the Lagrangian action \(\int_0^T{\left[\frac{1}{2}|\dot x(t)|^2+U\left(x(t)\right)\right]dt}\ ,\) defined on the
Sobolev space \(\Lambda=H^1([0,T],\hat{\mathcal X})\) of paths with value in the configuration space of the problem. Among these, the minimizers are likely to be the simplest and Poincaré proposed to
look for them in a short note of 1896. Coercivity (i.e., forbidding minimizers at infinity) is achieved by restricting \(\Lambda\) appropriately. Thanks to Tonelli's theorem this insures the
existence of a minimizer with values in the closure \(\mathcal X\) of \(\hat{\mathcal X}\ ,\) that is of a path possibly with collisions. Indeed, Newton's force is weak enough so that the action
remains finite along a path which ends in a configuration where some of the bodies are in collision. Technically, this comes from Sundman estimates \(||\vec r_i-\vec r_j||=O(|t-t_0|^\frac{2}{3}), ||\
dot{\vec r_i}-\dot{\vec r_j}||=O(|t-t_0|^{-\frac{1}{3}})\) for any pair of bodies entering a collision at time \(t_0\ .\) The equilateral homographic solutions (of any eccentricity) are characterized
as action minimizers in their homotopy class among loops of fixed period \(T\) in \(\mathcal X\) (Venturelli). The corresponding relative equilibrium is an action minimizer among period \(T\) loops
with the Italian symmetry \(x\left(t-{T\over 2}\right)=-x(t)\ .\) When the masses are all equal, the symmetries may interchange them. If the symmetry group contains a copy of \({\mathbb{Z}/3\mathbb
{Z}}\) acting by shifting the time by \(T/3\) and circularly permuting the bodies, one gets 3-body choreographies , for example the figure eight solution whose symmetry group is the dihedral group \
(D_6\) of order 12. This last solution, first found numerically by C. Moore, was rediscovered and proved to exist by A. Chenciner and R. Montgomery (see [CM]); C. Simó showed its stability (see
[CelMech]) and C. Marchal discovered its connection to the equilateral relative equilibrium through a Liapunov family of relative periodic solutions (see [Ma]). Animations of the figure eight
solution and other choreographies appear in Bill Casselman's column. The symmetry groups of the planar three-body problem and the corresponding action-minimizing trajectories were classified by V.
Barutello, D. Ferrario and S. Terracini in [BFT].
Global evolution
Lagrange-Jacobi and Sundman
The Lagrange-Jacobi identity \(\ddot I=4H+2U\) and the Sundman inequality \(IK-J^2\ge|C|^2\) are the first tools for the analysis of the global evolution (\(I=|x|^2, J=x\cdot y, K=|y|^2\ ,\) where we
suppose the galilean frame chosen so that the center of mass is fixed at the origin). The first (which implies the virial theorem) is an elementary derivation using the homogeneity of the potential,
the second is a complex Cauchy-Schwarz inequality; when written as \(K\ge \frac{J^2}{I}+\frac{C^2}{I}\) it amounts to a comparison with a two-body problem obtained by computing the part of the
velocity corresponding to homothetic deformation, minorizing the rotational part and forgetting the part which corresponds to deformation of the shape: the existence of a shape for a triangle is
indeed a major difference with the two-body problem.
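Both identities can be checked in a few lines; here is a hedged sketch in the notation of this paragraph (mass inner product, \(\ddot x=\nabla U(x)\), \(H=\tfrac{1}{2}K-U\), with \(U>0\) homogeneous of degree \(-1\)):

```latex
% Lagrange-Jacobi: differentiate I = |x|^2 twice.
\dot I = 2\,x\cdot y = 2J, \qquad
\ddot I = 2|y|^2 + 2\,x\cdot\ddot x = 2K + 2\,x\cdot\nabla U(x).
% Euler's identity for U homogeneous of degree -1 gives x\cdot\nabla U = -U, hence
\ddot I = 2K - 2U = 4\Bigl(\tfrac{1}{2}K - U\Bigr) + 2U = 4H + 2U.
% Sundman: Lagrange's identity |x|^2|y|^2 - (x\cdot y)^2 = |x\wedge y|^2, together
% with the fact that the angular momentum C is bounded by the full rotational part:
IK - J^2 = |x|^2|y|^2 - (x\cdot y)^2 = |x\wedge y|^2 \;\ge\; |C|^2 .
```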
The shape sphere
Triangles in the plane modulo translations may be identified with points in \(\mathbb{R}^4\equiv \mathbb{C}^2\ ,\) for example by choosing Jacobi coordinates \(z_1=\vec r_2-\vec r_1,\, z_2=\vec r_3-\frac{m_1\vec r_1+m_2\vec r_2}{m_1+m_2}\ .\) Oriented similarity classes of triangles in the plane are then identified with points in the shape sphere, quotient of the unit sphere \(|z_1|^2+|z_2|^2=1\) by
the Hopf map \((z_1,z_2)\mapsto (|z_1|^2-|z_2|^2,\, 2\bar z_1z_2)\) from \(\mathbb{C}\times \mathbb{C}\) to \(\mathbb{R}\times \mathbb{C}\ .\) A good animation of the way the shape of a triangle
changes along a path in the shape sphere is given in Bill Casselman's column (notice that a stereographic projection has changed the sphere (minus one point) into a plane). See also [Mon2]. The
(unoriented) similitude classes of triangles in \(\mathbb{R}^3\) are in correspondence with a disk (identification of the two hemispheres). Paths in the shape sphere have natural lifts to paths of
three body configurations with zero angular momentum. From a study of the conformal geometry of the shape sphere, R. Montgomery was able to deduce that any bounded zero angular momentum (and hence
planar) solution of the three-body problem suffers infinitely many syzygies (=collinearities) provided it does not suffer a triple collision (see [Mon]).
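As a concrete illustration of this construction, here is a minimal Python sketch (the function name and default masses are our own choices) mapping a plane triangle, with vertices given as complex numbers, to the shape sphere via the Jacobi coordinates and the Hopf map quoted above; oriented-similar triangles land on the same point.

```python
def shape_point(r1, r2, r3, m1=1.0, m2=1.0):
    """Map a plane triangle (vertices as complex numbers) to the shape sphere.

    Uses the Jacobi coordinates z1 = r2 - r1, z2 = r3 - (m1 r1 + m2 r2)/(m1 + m2),
    normalizes (z1, z2) to the unit 3-sphere, then applies the Hopf map
    (z1, z2) -> (|z1|^2 - |z2|^2, 2 conj(z1) z2) from C x C to R x C.
    """
    z1 = r2 - r1
    z2 = r3 - (m1 * r1 + m2 * r2) / (m1 + m2)
    n = (abs(z1) ** 2 + abs(z2) ** 2) ** 0.5  # normalize to the unit sphere
    z1, z2 = z1 / n, z2 / n
    w = 2 * z1.conjugate() * z2
    # Return (real, real, real): a point on the unit 2-sphere.
    return (abs(z1) ** 2 - abs(z2) ** 2, w.real, w.imag)
```

Translations cancel in the Jacobi coordinates, a rotation multiplies both by a unit complex number that drops out of the Hopf map, and scaling is removed by the normalization; a reflection flips the sign of the last coordinate, which is why unoriented shapes correspond to a disk.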
Two important general results are due to Sundman (with precisions by Birkhoff for the second one) : a triple (=total) collision can occur only if \(C=0\ ;\) more precisely, if \(C\not=0\ ,\) and if
the size \(I\) of the system becomes small enough, one body must escape to infinity. As in the two body problem, the escape is either parabolic (\(|\vec r_i(t)|=O(t^{\frac{2}{3}})\)) or hyperbolic (\
(|\vec r_i(t)|=O(t)\)). Painlevé proved that a singularity, i.e. a time after which a solution cannot be extended is necessarily a collision. In fact, double collisions may be regularized and this
allowed Sundman to prove that if \(C\not=0\ ,\) solutions can be defined by series which converge for all values of a renormalized time, a result which unfortunately does not give any insight into
the nature of these solutions. Near the end of his life, G. Lemaître worked out a nice simultaneous regularization of the double collisions, associated with a 4-fold covering of the shape sphere
ramified at the three double-collision points (see [Le]). Triple collisions are responsible for very complicated behaviour. Studying first the problem on the line (\(p=1\)) and regularizing the
double collisions by bouncing, one sees that if a third body approaches near the center of mass immediately after a double collision, it can be ejected with arbitrarily high velocity (see [Mc]). The
difference with a two-body collision is that now the entry and exit velocities of one body in a small ball around the center of mass can differ by an arbitrarily great amount. Energy conservation is
no obstruction to such behavior since the two remaining bodies are left with a large, negative potential energy.
Final motions
The possible final motions of the system were analyzed by Chazy and later by Alexeiev (see [AKN], [Ma]). Particularly remarkable are the oscillatory solutions whose simplest model was given by
Sitnikov for the spatial restricted problem: the two massive bodies describe almost circular Keplerian orbits in a plane while the zero mass body oscillates on the orthogonal line to the plane
through the center of mass, the lim inf of its distance to it being finite, while the lim sup may be infinite (see [M] for a description of the symbolic dynamics associated to these solutions). For
the planar problem (\(p=2\)), a whole set of complicated solutions was constructed by Moeckel using methods of symbolic dynamics (see [Mo]): the idea is to use the complicated dynamics near a triple
collision, that is with small values of angular momentum; the resulting solutions pass near relative equilibria or near escape solutions in any prescribed order. Technically it amounts to the
existence of transversal heteroclinic solutions between singularities or periodic solutions of a regularized flow. Closer to mission design, heteroclinic solutions in the restricted problem have been
used to save fuel by C. Simó and coauthors (see [Si]), W.S. Koon, M.W. Lo, J.E. Marsden and S.D. Ross (see [CelMech]),...
Astrophysicists extensively studied the complicated evolution of a binary star under successive parabolic or hyperbolic close encounters with a third star, each time different. This gravitational
scattering plays an important role in the understanding of the evolution of stellar systems. Also much studied is the evolution of an initially bounded triple system with negative energy. The methods
combine mathematical and physical considerations with extensive numerical simulations (see [HH] and [VK]).
The oldest open question in dynamical systems
According to Herman (see [H]), it is to determine if, in the conditions of the planetary case, with in particular \(C\not=0\) (which forbids triple collisions) and after regularization of double
collisions, the non-wandering set of the flow is nowhere dense in an energy hypersurface. An affirmative answer would imply topological instability, the bounded orbits being nowhere dense, even if
their measure can be positive when Arnold's theorem applies.
The glimpse we just had of the complexity displayed by some classes of solutions of the three-body problem seems to indicate -- but does not prove in general -- the non-integrability of the problem.
Indeed, several proofs of non-integrability have been given since the end of nineteenth century; they are in general not easy and rely on somewhat restrictive notions of non-integrability.
Bruns, Painlevé
The non-existence of first integrals algebraic in the Cartesian coordinates of the positions and the momenta other than the ones deduced from those already mentioned is due to Bruns. It holds for any
number \(n\ge 3\) of bodies and any choice of the \(n\) masses. It was later generalized by Painlevé who showed that it is enough to suppose algebraicity only in the momenta.
The result proved by Poincaré in the second volume of Les méthodes nouvelles de la mécanique céleste is of a different nature: it asserts the non-existence of new integrals which are uniform analytic
functions in the elliptic elements and depend analytically on the (small) masses (or even admit a formal expansion in the masses with analytic coefficients). More precisely, Poincaré starts with a
Hamiltonian of the form \(H(I,\theta)=H_0(I)+\mu H_1(I,\theta)+\cdots\) obtained in the study of the planetary problem; the series expansion is made with respect to the small parameter \(\mu\) which
is of the order of the ratio of planetary masses to the mass of the Sun. If \(F=F_0+\mu F_1+\cdots\) is a first integral, the vanishing of the Poisson bracket \(\{H,F\}\) implies constraints on the
Fourier coefficients \(c_m(I)\) of \(F_1(I,\theta)=\sum{c_m(I)e^{im\cdot\theta}}\ :\) these coefficients must vanish each time \(m\) is a resonance of \(H_0\) i.e. \(\sum m_i\frac{\partial H_0}{\
partial I_i}(I)=0\) (hence the importance of periodic solutions). In fact things are more complicated because of the degeneracy of \(H_0\) which depends only on the fast actions. The theorem is a
consequence of the fact, far from obvious, that enough Fourier coefficients of \(F_1\) do not vanish. In contrast to Bruns' result, Poincaré's theorem does not say anything for any given choice of
the masses.
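A hedged sketch of the first-order computation behind this statement (with the Poisson-bracket convention \(\{f,g\}=\partial_I f\cdot\partial_\theta g-\partial_\theta f\cdot\partial_I g\); the opposite convention flips both sides and yields the same relation):

```latex
% Order \mu^0: \{H_0,F_0\}=0, so F_0=F_0(I) since H_0 depends only on the actions.
% Order \mu^1, with \omega(I)=\partial H_0/\partial I and Fourier expansions
% F_1=\sum_m c_m(I)e^{im\cdot\theta}, \quad H_1=\sum_m h_m(I)e^{im\cdot\theta}:
\{H_0,F_1\}+\{H_1,F_0\}=0
\;\Longleftrightarrow\;
\bigl(m\cdot\omega(I)\bigr)\,c_m(I)
= \Bigl(m\cdot\frac{\partial F_0}{\partial I}(I)\Bigr)\,h_m(I)
\quad\text{for all } m.
% On a resonance m\cdot\omega(I)=0 the left-hand side vanishes, forcing
% (m\cdot\partial F_0/\partial I)\,h_m=0: if enough coefficients h_m of the
% perturbation do not vanish, F_0 is functionally dependent on H_0.
```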
Ziglin, Morales-Ramis
More recently, Ziglin's and Morales-Ramis theories were used to prove the non-existence of additional meromorphic integrals in the neighborhood of well chosen particular solutions: the basic idea
here is to trace the implications of integrability on the structure of the differential Galois group of the variational equations along some explicitly known solution of the equations of motion (see
in particular the works of A. Tsyvgintsev).
Two cases of integrability
• the secular system of the planetary (or lunar) planar three-body problem is four dimensional, hence completely integrable because the angular momentum is a first integral;
• if one replaces the Newtonian potential, inversely proportional to the distance by the Jacobi potential inversely proportional to the square of the distance, a new first integral \(2IH-J^2\) of
the N-body problem exists which was discovered by Jacobi. This implies the complete integrability of the three-body problem on the line with such a potential.
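The conservation of \(2IH-J^2\) can be checked in two lines, using the notation \(I=|x|^2,\ J=x\cdot y,\ K=|y|^2\) introduced earlier, now with the degree \(-2\) Jacobi potential:

```latex
% Euler's identity for U homogeneous of degree -2 gives x\cdot\nabla U = -2U, so
\dot I = 2J, \qquad
\dot J = |y|^2 + x\cdot\ddot x = K - 2U = 2H \quad (\text{a constant}),
% and therefore
\frac{d}{dt}\bigl(2IH - J^2\bigr) = 2\dot I\,H - 2J\dot J = 4JH - 4JH = 0 .
```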
Still simpler than the 4-(and more)-body problem!
At least two important features appear when the number of bodies is greater than three:
• the possibility of superhyperbolic escape velocities and, associated to it, the existence of non-collision singularities, that is collision-free solutions where some bodies escape to infinity in
finite time (Gerver for \(3n\) bodies in the plane with \(n\ge 5\ ,\) Xia for five bodies in space, unknown for 4 bodies apart from the seminal work of Mather and McGehee on the line with double
collisions regularized);
• while in the three-body problem, if \(\sqrt{|h|}|c|\) is big enough, the integral manifold \(H=h<0, C=c\) is not connected (as in the restricted problem, the projection of the components on the
configuration space are called Hill regions, see [Mo]), it becomes connected when the number of bodies increases.
The author is grateful to Rick Moeckel for providing the animations and to the chief editor for turning them slim enough so that they can enter the article; he thanks the following colleagues and the
referees for helping him in various ways, from reading first drafts and correcting mistakes to giving a reference: Alain Albouy, Martin Celli, Jacques Féjoz, 傅燕宁 (Yanning Fu), Rick Moeckel,
Laurent Niederman, Philippe Robutel, Marc Serrero, Nataliya Shcherbakova, Дима Трещев (Dima Treschev), Alexey Tsygvintsev. Thanks to Douglas Heggie and Piet Hut for the permission to reproduce their
figure and to Walter Craig for getting the permission to reproduce figures 4 and 5.
[Ai] Sir G.B. Airy Gravitation: An Elementary Explanation of the Principal Perturbations in the Solar System, McMillan, London (1884)
[Al] A. Albouy Lectures on the Two-Body Problem, in Classical and Celestial Mechanics (the Recife lectures), H. Cabral and C. Diacu ed., Princeton University Press (2002)
[A] V.M. Alexeyev Sur l'allure finale du mouvement dans le problème des trois corps ICM Nice 1970, 2, 893-907
[AKN] В.И. Арнольд, В.В. Козлов, А.И. Нейштадт (V. I. Arnold, V. V. Kozlov, and A. I. Neishtadt) математические аспекты классической и небесной механики УРСС (2002), English translation Mathematical
aspects of classical and celestial mechanics, Springer-Verlag (1997)
[BFT] V. Barutello, D.L. Ferrario, S. Terracini, Symmetry groups of the planar three-body problems and action-minimizing trajectories, (2006), Arch. Rat. Mech. Anal, to appear
[BG] J. Barrow-Green Poincaré and the three-body problem AMS 1996
[Be] В.В. Белецкий (V.V. Beletski) очерки о движении космических тел, Наука (1977), French translation Essais sur le mouvement des corps cosmiques, Mir (1986), English translation Essays on the
Motion of Celestial Bodies, Birkhauser, Basel (2001)
[B] G.D.Birkhoff Dynamical Systems, Chapter IX, AMS 1927
[Bel] E. Belbruno Capture Dynamics and Chaotic Motions In Celestial Mechanics, Princeton University Press (2004)
[BP] D. Boccaletti, G. Pucacco Theory of Orbits, 2 volumes, Springer (1996 and 1999)
[Br] А.Д. Брюно (A.D. Bruno) ограниченная задача трёх тел: плоские периодические орбиты, наука (1990), English translation The restricted 3-Body Problem: Plane Periodic Orbits, de Gruyter (1994)
[CelMech] Celestial Mechanics, dedicated to D. Saari for his 60th birthday, A. Chenciner, R. Cushman, C. Robinson, Z.J. Xia ed., Contemporary Mathematics 292, AMS (2002)
[C] A. Chenciner Poincaré and the three-body problem, Séminaire Poincaré (Bourbaphy) XVI (2012) : Poincaré 1912--2012, pages 45-133
[CM] A. Chenciner and R. Montgomery A remarkable periodic solution of the three-body problem in the case of equal masses, Annals of Mathematics 152, p. 881-901 (2000)
[Co] C. Conley On Some New Long Periodic Solutions of the Plane Restricted Three-Body Problem, Communications on Pure and Applied Mathematics XVI, 449-467 (1963)
[F1] J. Féjoz Démonstration du ``Théorème d'Arnold" sur la stabilité du système planetaire (d'après Michael Herman), Ergodic theory and Dynamical Systems 24, 1-62 (2004)
[F2] J. Féjoz Global Secular Dynamics in the Planar Three-Body Problem, Celestial Mechanics and Dynamical Astronomy 84, 159-195 (2002)
[G] M. Gutzwiller, Moon-Earth-Sun: The oldest three-body problem, Reviews of Modern Physics, vol. 70, No. 2, (April 1998)
[HH] D. Heggie and P. Hut, The Gravitational Million-Body Problem, Cambridge University Press (2003)
[He] M. Hénon Generating Families in the Restricted Three-Body Problem, Springer, vol.1 (1997), vol.2 (2001)
[H] M. Herman Some open problems in dynamical systems, ICM Berlin 1998, II, 797-808
[L] J.L. Lagrange Essai sur le problème des trois corps, 1772, Oeuvres tome 6
[Las] J. Laskar La stabilité du système solaire, in Chaos et déterminisme, Points Seuil
[Le] Chanoine G. Lemaître The Three Body Problem, NASA CR 110 (1964)
[Ma] C. Marchal The Three-Body Problem, Elsevier (1990), Russian translation задача трёх тел, Институт компьютерных исследований, Москва-Ижевск (2004)
[Mar] R. Marcolongo Il problema dei tre corpi, da Newton (1686) ai nostri giorni, Ulrico Hoepli, 1919
[Mc] R. McGehee Triple Collision in the Collinear Three-Body Problem, Inventiones Math. 27, 191-227 (1974)
[Mo] R. Moeckel Some Qualitative Features of the Three-Body Problem, Contemporary Mathematics 81, 1-22 (1988)
[Mon] R. Montgomery Infinitely many syzygies, Arch. Rational Mech. Anal. 164, 311-340 (2002)
[Mon2] R. Montgomery The three-body problem and the shape sphere (2014)
[M] J. Moser Stable and random Motion in dynamical systems, Princeton University Press (1973)
[P1] H. Poincaré Les Méthodes nouvelles de la mécanique céleste, Gauthier-Villars (1892, 1893, 1899)
[P2] H. Poincaré Leçons de mécanique céleste, Gauthier-Villars (1905 1907, 1910)
[SM] C.L. Siegel and J.K. Moser Lectures on Celestial Mechanics, Grundlehren der mathematischen Wissenschaften. Springer Verlag (1971)
[Si] C. Simó Gravitation and Chaos in the Solar System
[Sz] V. Szebehely The Theory of Orbits, Academic Press (1967)
[VK] M. Valtonen and H. Karttunen, The Three-Body Problem, Cambridge University Press (2006)
[Wi] A. Wintner The Analytical foundations of Celestial Mechanics, Princeton University Press 1949
See also
Averaging, Aubry-Mather theory, Chaos, Dynamical Systems, Hamiltonian Dynamics, KAM Theory, N-Body Simulations, Normal Forms, Periodic, Central Configurations
The package \textsf{NBtsVarSel} provides functions for performing variable selection in sparse negative binomial GLARMA models, which are widely used for modeling discrete-valued time series with overdispersion. The method estimates the autoregressive moving average (ARMA) coefficients of GLARMA models and the overdispersion parameter, while performing variable selection on the regression coefficients of the Generalized Linear Model (GLM) part with regularized methods. For further details on the methodology we refer the reader to [1].
We describe the negative binomial GLARMA model for a single time series with additional covariates. Given the past history \(\mathcal{F}_{t-1} = \sigma(Y_s, s\leq t-1)\), we assume that \begin{equation} Y_t \mid \mathcal{F}_{t-1} \sim \text{NB}\left(\mu_t^\star, \alpha^\star\right), \label{eq1} \end{equation} where \(\text{NB}(\mu, \alpha)\) denotes the negative binomial distribution with mean \(\mu\) and overdispersion parameter \(\alpha\). In (\ref{eq1}), \begin{equation}\label{eq2} \mu_t^\star=\exp(W_t^\star) \textrm{ with } W_t^\star=\sum_{i=0}^p \beta_i^\star x_{t,i}+Z_t^\star. \end{equation} Here the \(x_{t,i}\)'s represent the \(p\) regressor variables (\(p\geq 1\)) and \begin{equation}\label{eq3} Z_t^\star=\sum_{j=1}^q \gamma_j^\star E_{t-j}^\star \textrm{ with } E_t^\star=\frac{Y_t-\mu_t^\star}{\mu_t^\star + {\mu_t^\star}^2/\alpha^\star}, \end{equation} where \(1\leq q\leq\infty\) and \(E_t^\star=0\) for all \(t\leq 0\). When \(q=\infty\), \(Z^{\star}_t\) satisfies the ARMA-like recursion in (\ref{eq3}), because a causal ARMA process can be written as an MA process of infinite order. The vector \(\pmb{\beta}^{\star}\) is assumed to be sparse, \textit{i.e.} a majority of its components are equal to zero. The goal of the \textsf{NBtsVarSel} package is to retrieve the indices of the nonzero components of \(\pmb{\beta}^{\star}\), also called active variables, from the observations \(Y_1, \dots, Y_n\).
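To make the recursion concrete, here is a small Python sketch (not part of the package, which is written in R) that simulates from the model above; the function name and the mapping to NumPy's negative binomial parametrization are our own choices:

```python
import numpy as np
from collections import deque

def simulate_glarma_nb(X, beta, gamma, alpha, rng=None):
    """Simulate Y_1, ..., Y_n from the NB-GLARMA model of Eqs. (1)-(3).

    X     : (p+1, n) regressor matrix (row 0 is the intercept),
    beta  : length p+1 coefficient vector (sparse in the model),
    gamma : length q vector (gamma_1, ..., gamma_q),
    alpha : negative binomial overdispersion parameter.
    """
    rng = rng or np.random.default_rng(0)
    n = X.shape[1]
    # E_hist holds (E_{t-1}, ..., E_{t-q}); E_t = 0 for t <= 0.
    E_hist = deque([0.0] * len(gamma), maxlen=len(gamma))
    Y = np.zeros(n)
    for t in range(n):
        Z = sum(g * e for g, e in zip(gamma, E_hist))
        mu = float(np.exp(X[:, t] @ beta + Z))
        # NB(mu, alpha) in NumPy's (r, p) parametrization: r = alpha and
        # p = alpha / (alpha + mu), giving mean mu and variance mu + mu^2/alpha.
        Y[t] = rng.negative_binomial(alpha, alpha / (alpha + mu))
        # Pearson-type residual E_t driving the ARMA part of the recursion.
        E_hist.appendleft((Y[t] - mu) / (mu + mu**2 / alpha))
    return Y
```

With `gamma = [0.5]` and `alpha = 2` this mimics the setting of the simulated data used below.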
We load the dataset of observations \verb|Y| with size \(n=50\) provided within the package.
The number of regressor variables \(p\) is equal to 30. Data \verb|Y| is generated with \(\pmb{\gamma}^{\star} = (0.5)\), \(\alpha=2\), and \(\pmb{\beta^{\star}}\), such that all the \(\beta_i^{\
star}=0\) except for five of them: \(\beta_1^{\star}=1.73\), \(\beta_3^{\star}=0.38\), \(\beta_{17}^{\star}=0.29\), \(\beta_{21}^{\star}=-0.64\), and \(\beta_{23}^{\star}=-0.13\). The design matrix \
(X\) is built by taking the covariates in a Fourier basis.
We initialize \(\pmb{\gamma^{0}} = (0)\) and \(\pmb{\beta^{0}}\) to be the coefficients estimated by the \textsf{glm.nb} function from the \textsf{MASS} package:
library(MASS)
gamma0 = c(0)
glm_nb = glm.nb(Y~t(X)[,2:(p+1)])
beta0 = as.numeric(glm_nb$coefficients)
alpha0 = glm_nb$theta
Estimation of \(\pmb{\gamma^{\star}}\)
We can estimate \(\pmb{\gamma^{\star}}\) with the Newton-Raphson method. The output is the vector of estimates of \(\pmb{\gamma^{\star}}\). The default number of iterations \verb|n.iter| of the Newton-Raphson algorithm is 100.
gamma_est_nr = NR_gamma(Y, X, beta0, gamma0, alpha0, n.iter=100)
## [1] 0.2407588
This estimate is obtained from the initial values \(\pmb{\gamma^{0}}\) and \(\pmb{\beta^{0}}\); it improves once we substitute these initial values with \(\pmb{\hat{\gamma}}\) and \(\pmb{\hat{\beta}}\) obtained by the \verb|variable_selection| function.
Variable selection
We perform variable selection and obtain the coefficients estimated to be active, together with the estimates of \(\pmb{\gamma^{\star}}\) and \(\pmb{\beta^{\star}}\). We take the number of iterations of the algorithm \verb|k.max| equal to 1, the \verb|"cv"| method for choosing \(\lambda\), the stability selection threshold \verb|t| equal to 0.3, and the number of replications \verb|n.rep| \(=1000\). For more details about stability selection and the choice of parameters we refer the reader to [1]. The function supports parallel computation. To make it work, users should install the package \textsf{doMC}, which is not supported on Windows platforms.
result = variable_selection(Y, X,
gamma.init = gamma0, alpha.init = NULL, k.max = 1, method = "cv",
t = 0.3, n.iter = 100, n.rep = 1000)
## Warning: executing %dopar% sequentially: no parallel backend registered
beta_est = result$beta_est
Estim_active = result$estim_active
gamma_est = result$gamma_est
alpha_est = result$alpha_est
## Estimated active coefficients: 1 3 17 21
## Estimated gamma: 0.2407588
## Estimated alpha: 2.636391
Illustration of the estimation of \(\pmb{\beta^{\star}}\)
We display a plot that illustrates which elements of \(\pmb{\beta^{\star}}\) are selected to be active and how close the estimated value \(\hat{\beta_i}\) is to the actual values \(\beta^{\star}_i\).
True values of \(\pmb{\beta^{\star}}\) are plotted in crosses and estimated values are plotted in dots.
# First, we make a dataset of estimated betas
beta_data = data.frame(beta_est)
colnames(beta_data)[1] <- "beta"
beta_data$Variable = seq(1, (p + 1), 1)
beta_data$y = 0
beta_data = beta_data[beta_data$beta != 0, ]
# Next, we make a dataset of true betas
beta_t_data = data.frame(beta)
colnames(beta_t_data)[1] <- "beta"
beta_t_data$Variable = seq(1, (p + 1), 1)
beta_t_data$y = 0
beta_t_data = beta_t_data[beta_t_data$beta != 0, ]
# Finally, we plot the result
plot = ggplot() +
  geom_point(data = beta_data, aes(x = Variable, y = y, color = beta),
             pch = 16, size = 5, stroke = 2) +
  geom_point(data = beta_t_data, aes(x = Variable, y = y, color = beta),
             pch = 4, size = 6, stroke = 2) +
  scale_color_gradient2(name = expression(hat(beta)), midpoint = 0,
                        low = "steelblue", mid = "white", high = "red") +
  scale_x_continuous(breaks = c(1, seq(10, (p + 1), 10)), limits = c(0, (p + 1))) +
  scale_y_continuous(breaks = c(), limits = c(-1, 1)) +
  theme(legend.title = element_text(color = "black", size = 12, face = "bold"),
        legend.text = element_text(color = "black", size = 10)) +
  theme(axis.text.x = element_text(angle = 90),
        axis.text = element_text(size = 10),
        axis.title = element_text(size = 10, face = "bold"),
        axis.title.y = element_blank())
As we can see from the plot, all the zero coefficients are estimated to be zero and four out of five non-zero coefficients are estimated to be non-zero. Moreover, the estimates also correctly
preserved the sign of the coefficients.
[1] M. Gomtsyan. “Variable selection in a specific regression time series of counts”, arXiv:2307.00929
Difference between NPV and IRR - Termscompared
Difference between NPV and IRR
A company that is going to invest money into a new project would certainly want to know the viability or profitability of that project. To appraise the potential performance of an investment project,
companies mostly use four techniques – payback method, accounting rate of return method, net present value method and internal rate of return method. Each of these four techniques evaluates project
performance from a different angle and companies mostly use more than one technique to assess their forthcoming projects.
Net present value (NPV) and internal rate of return (IRR) are extensively used measures for appraising investment projects. Unlike the simple payback method and the accounting rate of return method, NPV and IRR both take into account the time value of money, which makes them more reliable and practical investment appraisal techniques for companies. This article defines and explains the difference between NPV and IRR.
Definitions and meanings:
A project’s net present value (NPV) is defined as the difference between present value of total cash inflow and the present value of total cash outflow over the life of the project.
Present value of a cash flow means the amount of cash due to be received or paid at a future point of time, discounted using an appropriate discount factor which in most cases is the company’s cost
of capital.
The cost of capital of a business is the minimum combined rate of return that its investors (shareholders and creditors) expect from a business.
Projects with a positive NPV promise a positive overall cash flow; they are considered desirable and are therefore accepted. Projects with a negative NPV indicate a negative overall cash flow; they are considered undesirable and are not undertaken.
The internal rate of return (IRR) is the rate of interest at which the NPV of a project becomes zero, meaning that at this rate the project breaks even. Projects whose IRR is greater than the company's cost of capital are considered desirable and therefore accepted; any project whose IRR is below the company's cost of capital is not desirable and should be rejected.
As stated earlier, IRR is the rate at which the NPV of a project equals zero; therefore, any project that promises an IRR below the cost of capital has a negative NPV and should not be undertaken.
Formula of NPV and IRR:
Let’s explain the computation of NPV and IRR with help of an example:
Julia Private Limited is a chair-making company. It has an opportunity to invest in a three-year project to make tables, with the following variables:
Initial Investment for purchase of machinery $300,000
Cash inflow in first year $50,000
Cash inflow in second year $160,000
Cash inflow in third year $230,000
Estimated scrap value of the machinery at the end of project $40,000
Cost of Capital of the company 10%
The NPV of the project would be:

Year/Cash flow              Year 0       Year 1             Year 2             Year 3
Initial investment        ($300,000)
1st cash inflow                          $50,000
2nd cash inflow                                             $160,000
3rd cash inflow                                                                $230,000
Scrap value                                                                    $40,000
Net cash flows            ($300,000)     $50,000            $160,000           $270,000
Discount factor (10%)     × 1            × 1/1.1 ≈ 0.909    × 1/1.1² ≈ 0.826   × 1/1.1³ ≈ 0.751
Discounted cash flows     ($300,000)     $45,450            $132,160           $202,770

NPV = -$300,000 + $45,450 + $132,160 + $202,770 = $80,380
At a 10% cost of capital, the net present value of this project is $80,380. Therefore it should be accepted by Julia Private Limited.
To calculate the IRR for this project, the NPV is computed at two different discount rates (i.e., two different costs of capital), and the rate at which the NPV would be zero is then estimated. We have already computed the NPV at a 10% cost of capital. Now let's compute a new NPV for the same project using a 20% cost of capital:

Net cash flows            ($300,000)     $50,000            $160,000           $270,000
Discount factor (20%)     × 1            × 1/1.2 ≈ 0.833    × 1/1.2² ≈ 0.694   × 1/1.2³ ≈ 0.579
Discounted cash flows     ($300,000)     $41,650            $111,040           $156,330

NPV = -$300,000 + $41,650 + $111,040 + $156,330 = $9,020
The NPV is still positive at 20%, so the IRR lies above 20%. By linear interpolation between the two computed NPVs, IRR ≈ 10% + [$80,380 / ($80,380 - $9,020)] × (20% - 10%) ≈ 21.3%. The project will be accepted, as this IRR is greater than the 10% cost of capital.
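The two calculations above can be reproduced programmatically. Below is a minimal sketch in Python; the function names `npv` and `irr` are illustrative, not from any standard library, and exact discount factors are used, so the NPV differs slightly from the rounded $80,380 figure above.

```python
# Sketch of the NPV/IRR computations; cash flows and the 10% cost of
# capital are taken from the Julia Private Limited example above.

def npv(rate, cash_flows):
    """Discount each cash flow at `rate`; period 0 is undiscounted."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """Find the rate where NPV crosses zero, by bisection.

    Assumes a single sign change of NPV between `lo` and `hi`, which
    holds for a conventional project (one initial outflow).
    """
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid          # NPV still positive: IRR is higher
        else:
            hi = mid          # NPV negative: IRR is lower
        if hi - lo < tol:
            break
    return (lo + hi) / 2

flows = [-300_000, 50_000, 160_000, 270_000]  # year 3 includes $40,000 scrap
print(round(npv(0.10, flows)))  # ≈ 80,541 with exact discount factors
print(irr(flows))               # a rate between 0.21 and 0.22 (about 21.5%)
```

The exact IRR is a little above the interpolated 21.3%, because the NPV-versus-rate curve is convex rather than linear, which is precisely the limitation of simple interpolation discussed later in this article.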
Difference between NPV and IRR:
The main difference between NPV and IRR is given below:
1. Outcome value:
The net present value (NPV) technique of investment appraisal shows the estimated net value of return in monetary terms that the project would generate. It considers the discounted value of all the
possible cash outflows and inflows regarding a specific project and then compares the two to get a net positive or negative cash flow known as net present value.
The internal rate of return (IRR) method shows the value of return for a project in percentage terms. If the IRR, rather than the cost of capital, is used to discount the cash flows of a project, the NPV of the project is zero.
2. Basis of decision:
Generally, a project is accepted if its NPV is positive, i.e., if it shows surplus funds at the end of the project. However, it is possible that a project generates a positive NPV but the business still declines it, because that NPV falls short of the minimum NPV threshold set by management.
IRR is used to gauge how sensitive a project's appraisal is to the cost of capital: it marks the discount rate above which the NPV turns negative. Therefore, if the IRR is greater than the cost of capital, the project is accepted; otherwise it is rejected.
3. Assumptions:
The NPV technique assumes that the cash inflows generated by the project are reinvested at the cost of capital of the business. This assumption is somewhat realistic, because the cost of capital of a business reflects the risk the business is already taking on its investments.
The IRR method assumes that the cash inflows are reinvested at IRR or internal rate of return.
4. Cash flow variations:
The NPV calculation accommodates any cash outflows after the initial outflow of cash. If a later cash outflow occurs after the initial one, it can be discounted at the applicable discount factor or cost of capital and included in the calculation easily.
The IRR calculation is disturbed by cash outflows that occur after the initial cash outflow, because in such a situation it can produce more than one IRR, which is unrealistic. An alternative method of calculation that overcomes this problem is the modified internal rate of return (MIRR) method, which is beyond the scope of this discussion.
5. Applications:
NPV enables a business to make constructive investment decisions because it not only accounts for the whole project life but also applies a discount factor reflecting the minimum return the investors of the business would accept, or the return the business already earns on its other investments.
IRR is used to indicate the risks attached to the project for which NPV is calculated. It shows the discount rate at which the project's NPV is neither positive nor negative, which indicates how sensitive the project is to the company's cost of capital.
NPV versus IRR – tabular comparison
A tabular comparison of NPV and IRR is given below:
Outcome value: NPV is expressed in monetary terms; IRR in percentage terms.
Basis of decision: a project is accepted if its NPV is positive; under IRR, a project is accepted if its IRR is greater than the cost of capital.
Assumptions: NPV assumes cash flows are reinvested at the cost of capital; IRR assumes they are reinvested at the IRR.
Cash outflow variations: the NPV calculation can accommodate variable cash flows; the simple IRR method cannot, though the modified IRR (MIRR) calculation can.
Applications: NPV is used for appraising the outcome of a project; IRR is used to measure sensitivity to a company's cost of capital.
Investment appraisal is an integral part of the finance function of a business. Every business raises capital with the intention of investing it to earn profits. Investment appraisal aids the management of a company in making well-grounded investment decisions based on reasonable assumptions. NPV and IRR are both very effective appraisal tools; however, these methods have their own limitations, which must be considered with due care when relying on their results.
The accuracy of an NPV calculation is limited by the accuracy of its inputs, such as the life of the project, the estimated cash flows, and the cost of capital. The simple IRR method assumes a linear relationship between NPV and the rate of return when plotted on a graph, whereas the true relationship is curved; the method nevertheless provides a reasonable estimate. Additionally, IRR alone cannot be used to compare two investments: one investment may have a lower IRR yet generate more cash than another with a higher IRR.
Leave a reply
|
{"url":"https://www.termscompared.com/difference-between-npv-and-irr/","timestamp":"2024-11-12T08:52:18Z","content_type":"text/html","content_length":"74628","record_id":"<urn:uuid:6274c2f7-3172-4b41-81e9-f92d2e0cce5c>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00153.warc.gz"}
|
Need help with statistics homework
Statistics Questions and Homework Answers
JA: Tell me more about what you need help with so we can help you best.
Customer: It's just statistics hw. 10 questions.
JA: Is there anything else the Tutor should be aware of?
Customer: Nope! Can you also help with statistics homework? I have about 10 problems I need help with, just need the answers, no work shown.
Free* Statistics Homework Help - Advanced Statistics - College Homework Help and Online Tutoring: Get online tutoring and college homework help for Advanced Statistics. ... Need Help With Advanced Statistics? ... Online Statistics Homework Help and Tutors.
Statistics Homework Help Online | Do My Statistics Homework: This is when they feel the need to look for online statistics homework help. TopAssignmentExperts is one of the most trusted and renowned statistics homework ...
Hire someone to do your Statistics homework/exam for you: When you need help with any sort of statistics homework, MyMathGenius.com is the perfect place to come to for assistance since we have an expert staff which ...
Relevant to maintain them choose to help; students with statistics homework help with your own. Website provides training program were less likely to count,. Affordable homework assistance and
probability for statistics textbooks see the…
Need help with statistics homework - High-Quality Writing… Need help with statistics homework - Get an A+ grade even for the hardest assignments. Give your essays to the most talented writers. Allow
the top writers to do your homework for you. Statistics Homework Help | Stucomp Our highly qualified and experienced experts to complete, the your statistics homework with affordable price and we are
available by 24/7 hours.
Need help with statistics homework - receive a 100% authentic, plagiarism-free paper you could only imagine about in our academic writing service leave behind those sleepless nights writing your
coursework with our custom writing help Use from our affordable custom term paper writing services and get the most from amazing quality
I need help with statistics homework - Custom Essays for ... I need help with statistics homework Keith Loney November 04, 2018 Earn a homework help and math homework helps. Searching for your do you
find answers for those who need help, median and enjoy it takes a homework questions - i have come up. My Geeky Tutor - Statistics Homework Help Online Get Statistics Homework Help. We will provide
you with the best quality Math and Statistics Homework Help online, at any level (high school, College, Theses, Dissertations) and projects involving statistical software (such as Excel, Minitab,
SPSS, etc.) If you need help with Statistics, you have come to the right place! Statistics homework help | Get online help with statistics ...
Buy Homework Online @ Low Prices For Homework Help Services
Need Help Homework Statistics | Best Tutors Help
A student waiting tables at a restaurant near the university felt that customers ten to tip female waiters better that they tip waiters.. To confirm his suspicion, he randomly selected 50 female
waiters and another random sample of 75 male waiters who waited tables at different restaurants near the university and asked them to keep a record of tips received for one week..
Do My Homework Services You Can Rely On So, they do not fail to search for these experts and pay for homework. When I can’t do my homework, I simply go in search of someone to do my homework for me.
When new students come with the question, can I pay someone to do my homework? The answer I give them is a big yes. You cannot ask your fellow students to help you do your homework. Need Help
Statistics Homework - s3.amazonaws.com Need Help Statistics Homework. There are thousands of online sites offering student and academic help but not every paper writing service is created equal.
Statistics Homework Help | Online Assistance With Your ... Professional Online Statistics Assignment Help. Are you looking for online statistics homework help? Or do you have problems with statistics
and you need help in this field of study? If the answer is yes, then the right please to seek help is at 123Homework.com. The Most Affordable Online Statistics Homework Help - TFTH
|
{"url":"https://articlezmoj.web.app/renuart25464rife/need-help-with-statistics-homework-2616.html","timestamp":"2024-11-06T11:03:45Z","content_type":"text/html","content_length":"20078","record_id":"<urn:uuid:baad0199-7735-4c5e-86e0-65a1ca1510ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00680.warc.gz"}
|
Woburn Challenge 2001
Problem 3: Austin Powers III, Act II: Back to the Future
P constructs the time machine outlined in Act I and has traveled back in time to this question (Act I is problem 8). However, there is a slight problem in the programming. The machine works in binary, but uses a LIFO (last-in, first-out) stack for all numbers that it processes. Therefore, when you type in a number (in base 10) corresponding to the time period you wish to travel to, it converts it to binary (like all computers) but then reads it in LIFO order. That is, the last binary digit of the time period (i.e., the leftmost digit) is read first, and the first digit (the rightmost one) is read last.
The net result is that to compensate for P’s inadequate programming skills, you need to do the following: If you wish to go to time period X (0 ≤ X ≤ 1048575), you need to convert X to binary,
reverse the bit order and then convert back to base 10. This final number is the one that you will actually enter into the computer. Making this calculation is your task.
The first line of the input contains T, the number of test cases.
The remaining T lines of the input each contain a number X to be converted (in its original base 10 format).
For each input, output the final number in base 10 (i.e. with reversed bit order) on a separate line.
Sample Input
Sample Output
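The sample data did not survive extraction, so the examples below are my own. The required conversion is a direct bit reversal, which a short sketch makes concrete (the function name is mine):

```python
def reverse_bits(x: int) -> int:
    """Convert x to binary, reverse the bit order, convert back to base 10."""
    if x == 0:
        return 0                      # bin(0) is "0b0"; reversing keeps 0
    return int(bin(x)[2:][::-1], 2)   # strip the "0b" prefix, then reverse

# For example, 6 is 110 in binary; reversed it reads 011, which is 3.
print(reverse_bits(6))   # 3

# A full solution would read T, then T values of X (0 <= X <= 1048575),
# and print reverse_bits(X) for each on its own line.
```

Note that leading zeros vanish on conversion back to base 10, so the mapping is not its own inverse for inputs ending in zero bits.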
All Submissions
Best Solutions
Point Value: 5
Time Limit: 2.00s
Memory Limit: 16M
Added: Nov 23, 2008
Languages Allowed:
C++03, PAS, C, HASK, ASM, RUBY, PYTH2, JAVA, PHP, SCM, CAML, PERL, C#, C++11, PYTH3
|
{"url":"https://wcipeg.com/problem/wc01p3","timestamp":"2024-11-02T11:56:31Z","content_type":"text/html","content_length":"10476","record_id":"<urn:uuid:5de06f85-780a-4e31-9c9b-8fdd4b5930b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00283.warc.gz"}
|
Scientific calculator - ScienCalc 1.3.7
Developer: Institute of Mathematics and Statistics
software by Institute of Mathematics and Statistics →
Price: buy →
License: Shareware
File size: 0K
OS: (?)
Rating: 0 /5 (0 votes)
Scientific calculator - ScienCalc software represents a convenient and powerful scientific calculator. ScienCalc evaluates mathematical expressions. It supports the common arithmetic operations (+, -, *, /) and parentheses.
The program contains high-performance arithmetic, trigonometric, hyperbolic and transcendental calculation routines. All the function routines therein map directly to Intel 80387 FPU floating-point
machine instructions.
Find values for your equations in seconds:
Scientific calculator gives students, teachers, scientists and engineers the power to find values for even the most complex equation set. You can build an equation set, which can include a wide array of linear and nonlinear models for any application:
Linear equations
Polynomial and rational functions
Logarithmic and exponential functions
Nonlinear exponential and power equations
Pre-defined mathematical and physical constants
Understandable and convenient interface:
A flexible work area lets you type in your equations directly. It is as simple as a regular text editor. Annotate, edit and repeat your calculations in the work area. You can also paste your
equations into the editor panel.
Example of mathematical expression:
5.44E-4 * (x - 187) + (2 * x) + square(x) + sin(x/deg) + logbaseN(6;2.77)
History of all calculations done during a session can be viewed. Print your work for later use. Comprehensive online help is easily accessed within the program.
Here are some key features of "Scientific calculator ScienCalc":
Scientific calculations
Unlimited expression length
Parenthesis compatible
Scientific notation
More than 35 functions
More than 40 constants
User-friendly error messages
Simple mode (medium size on desktop)
tags scientific calculator your equations calculator sciencalc you can work area more than equation set values for mathematical expression the program find values
Download Scientific calculator - ScienCalc 1.3.7
Download Scientific calculator - ScienCalc 1.3.7
Purchase: Buy Scientific calculator - ScienCalc 1.3.7
Authors software
Equation graph plotter - EqPlot 1.2
Institute of Mathematics and Statistics
Equation graph plotter software plots 2D graphs from complex mathematical equations.
Scientific calculator - ScienCalc 1.3.7
Institute of Mathematics and Statistics
Scientific calculator - ScienCalc software represents a convenient and powerful scientific calculator.
Nonlinear regression analysis - CurveFitter 1.1
Institute of Mathematics and Statistics
Nonlinear regression analysis - CurveFitter software performs statistical regression analysis to estimate the values of parameters for linear, multivariate, polynomial, exponential and nonlinear
Similar software
Scientific calculator - ScienCalc 1.3.7
Institute of Mathematics and Statistics
Scientific calculator - ScienCalc software represents a convenient and powerful scientific calculator.
Equation graph plotter - EqPlot 1.2
Institute of Mathematics and Statistics
Equation graph plotter software plots 2D graphs from complex mathematical equations.
Info-Calculator 2.1
Taimanov S.
Info-Calculator is a program that can be used for scientific, accounting and other calculations.
ESBCalc Pro - Scientific Calculator 8.1.0
ESB Consultancy
ESBCalc Pro is a Enhanced Win32 Scientific Calculator with Infix Processing, Exponential Notation, Brackets, Scientific Functions, Memory, Optional Paper Trail, Printing, Result History List,
Integrated Help and more.
A+ Calc 2.0
WFW Software
A+ Calc is a scientific calculator application with an edit display that shows all calculations and allows recalculating and editing equations.
Scientific Letter 1.90.00
Scientific Networks Software
Scientific Letter is an original mail program that allows you to create messages with complex equations.
CalculatorX 1.2 build 0418
XoYo Software
CalculatorX is an useful, complex and enhanced scientific calculator that will help you with your calculations.
Systems of Nonlinear Equations 1.00
Orlando Mansur
System of Nonlinear Equations application numerically solves systems of simultaneous nonlinear equations.
DreamCalc Scientific Edition 3.4.0
Big Angry Dog
DreamCalc is a software program which provides a fully featured and convenient alternative to using a separate hand-held calculator when you are working on your PC.
FAS Calculator 2.0.0.1
FreeART Software
FAS Calculator is an expression calculator which allows you to directly enter an expression to be evaluated.
Other software in this category
Eye Relax 1.2
United Research Labs
Everyday so many thousands of people across the globe are working on their computer for hours.
DANCE - the dance patterns database 4.51
Markus Bader
DANCE - the dance patterns database is a very useful utility for those who want to learn how to dance.
Academia 3.0
Genesis Software
Academia is an educational program that can be used by anyone.
Open Book 2.3.4
Aleksei Vinidiktov
Open Book is a useful vocabulary maker utility for Windows 98, Me, XP, 2000, 2003 Server featuring an effective method for memorizing information.
Hormonal Forecaster - Fertility Software 5.2
Brian Frackelton
The Hormonal Forecaster easily charts fertility and ovulation to help you avoid or achieve pregnancy and conception by charting the most fertile days of a woman`s menstruation.
|
{"url":"https://shareapp.net/scientific-calculator---sciencalc_download/","timestamp":"2024-11-11T07:20:46Z","content_type":"application/xhtml+xml","content_length":"25568","record_id":"<urn:uuid:ec631759-35a6-4ffd-9082-850d7dea82ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00378.warc.gz"}
|
The SATisfying Physics of Phase Transitions
For the past few months, I’ve been thinking about the following equation.
\(A\vec{x} = \vec{b}.\) Specifically, I’ve been wondering about the possible configurations $\vec{x}$ that solve this system of equations.
If you’ve taken a linear algebra course, you will remember (okay, maybe you will remember) that this equation can have no solutions, one solution, or even infinitely many solutions. To get a handle
on these three cases, think about a system of equations of two variables. We can plot the cases as lines in the plane (let’s imagine we only have two equations for now).
If we want to apply an algorithm to find the number of solutions, the way to go is with Gaussian elimination. This is perhaps the most practical thing you learn in a linear algebra class, but of
course it’s not something that you should be doing by hand. Instead, we can write some code so that any matrix we feed in will give us this information.
Okay, I didn’t tell you the full story.
While I have been thinking about the above equation, I’ve neglected to append an important part to it. Instead, the equation I’ve been thinking about is: \(A\vec{x} = \vec{b} \mod{2}.\) The rules for
manipulating equations and matrices are the same, but the difference is that now we’re working over the binary field. That’s a fancy way of saying that algebraic manipulations obey: \(0 + 0 = 0, \\ 0
+ 1 = 1, \\ 1 + 0 = 1, \\ 1 + 1 = 0.\) Plus, any time we have a number, we’re allowed to divide it by two and take its remainder. So $3 = 1 \mod{2}$ and $18 = 0 \mod{2}$. Essentially, this encodes
whether the number you’re dealing with is even or odd.
In the binary field, we don’t have to worry about negative numbers because these can always be converted to either zero or one. In fact, everything we deal with will either be zero or one. So no
fractions, no irrational numbers. Only ones and zeros.
This simplifies things quite a bit, but still leaves us enough structure to have some fun.
Instead of having finitely many or infinitely many solutions, we now only have finitely many (including zero). The nature of modular arithmetic and only using zeros and ones means we’ve reined in
that pesky infinity.
The number of possible solutions can be calculated directly. Because we’re dealing with binary entries in all of our objects, our vector $\vec{x}$ will have exactly $2^N$ possible configurations,
where $N$ is the number of variables (each component has two choices, so you get $2\times2\times\ldots\times2 = 2^N$).
To recap, we start with the equation: \(A\vec{x} = \vec{b} \mod{2}.\) The matrix $A$ is an $M \times N$ binary matrix, $\vec{x}$ is an $N$-component binary vector, and $\vec{b}$ is an $M$-component
binary vector.
Then, we’re going to ask the following question:
What happens to the solution space on average as we increase $M$?
This question will carry us from linear algebra, to theoretical computer science, and finally to statistical physics.
When posed any question that uses the word average, you should reply, “What ensemble are you using?”
Seriously, the ensemble you choose determines everything. Imagine I told you that the average person loves running 100+ kilometres every week. This would probably seem pretty strange. Except the
people I asked were all long-distance runners who have been doing this for years.
The ensemble I chose (experienced long-distance runners) informed the sort of average I would then calculate.
In the exact same way, if we want to answer a question probabilistically in mathematics, we should be careful about our assumptions and how we define our ensemble.
Here’s a recipe for drawing a sample from an ensemble:
• Choose the number of variables $N$ and the number of equations $M$.
• For each of the $M$ rows, choose three columns to contain a one, and set the rest to zero.
• Choose the components of the vector $\vec{b}$ uniformly at random from $\lbrace 0, 1 \rbrace$.
• If any row is repeated, resample until all rows are distinct.
After this procedure, you will find yourself with a matrix $A$ and a vector $\vec{b}$. You can then plug this into your favourite tool to solve binary matrix equations, and see what comes out!
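Here is a self-contained sketch of that recipe plus a GF(2) Gaussian-elimination solver, in plain Python. All names are mine; the post does not specify an implementation:

```python
import random

def sample_instance(n, m, rng=random):
    """Draw (A, b) per the recipe: m distinct rows, each with three ones."""
    rows = set()
    while len(rows) < m:                      # resample any repeated row
        rows.add(tuple(sorted(rng.sample(range(n), 3))))
    A = [[1 if j in cols else 0 for j in range(n)] for cols in rows]
    b = [rng.randint(0, 1) for _ in range(m)]
    return A, b

def gf2_rank(rows):
    """Rank over GF(2); `rows` is a list of bit lists, copied before use."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:     # XOR away this column entry
                rows[i] = [x ^ y for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def count_solutions(A, b):
    """Number of x with A x = b (mod 2): 0 if inconsistent, else 2^(n - rank)."""
    n = len(A[0])
    r = gf2_rank(A)
    r_aug = gf2_rank([row + [bi] for row, bi in zip(A, b)])
    return 0 if r_aug > r else 2 ** (n - r)
```

The `count_solutions` formula makes the halving argument of the next section explicit: each linearly independent row raises the rank by one and cuts the solution count by a factor of two.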
Before we start averaging, let’s think about what an equation will do to the configuration space.
At first, we have no equations, so every configuration is a solution. As we saw before, there are $2^N$ of them. After we impose one constraint (a row of $A$), it specifies the parity of the sum of three variables. It doesn't matter which variables we look at, or how many of them: the parity can be odd or even, so half of the configurations remain and the other half are tossed out. The diagram below shows how imposing the parity to be zero selects four configurations, leaving the other four to be discarded.
This happens only when $M$ is small. However, as you make $M$ larger, there are more rows of the matrix which can “interact”. In the language of linear algebra, the rows will eventually become
linearly dependent, at which point the solution space will stop being chopped in half for each extra row, but will decay more slowly.
Different Disguises
I began this essay by talking about linear algebra, but this problem pops up in many different fields of science.
In theoretical computer science, this goes under the name of $k$-XORSAT, where $k = 3$ in our case. This is a satisfiability problem, which means you have a bunch of variables (the vector $\vec{x}$
from above) and then you have constraints (the matrix $A$ along with the parity vector $\vec{b}$), and the question is whether you can find a solution to the problem. Answering this is then the same as
performing Gaussian elimination and determining if a solution exists.
In statistical physics, we call it the $p$-spin model, where $p = 3$ in this case. Instead of binary variables, we map them to $\pm 1$, which are the values of the particles' spins. For any variable $x_i$, the spin variable is $s_i = (-1)^{x_i}$. The matrix $A$ tells us how the particles interact (the constraints), and the vector $\vec{x}$ is the spin configuration of the system. We can then define a Hamiltonian (think of this as an energy function), which has its lowest energy when all interactions are satisfied. Finding a solution, in the $p$-spin language, is about finding a ground state of the system.
The Phase Transition
Hopefully you’ve thought about what happens to the presence of solutions as we increase $M$. Actually, it’s better to talk about the parameter $\alpha \equiv M/N$, since this takes into account how
big the system is (the larger the number of variables, the more equations you should be able to add before completely constraining everything). Once we’ve done this, we can ask what the probability
of having a solution looks like as a function of $\alpha$. And when I say the word "probability", I'm thinking of looking at many samples of a matrix $A$ and parity vector $\vec{b}$, and then recording the fraction of samples for which a solution exists.
Here's what it looks like, as an animation:
It’s a drastic change. Either you will certainly have a solution, or you won’t. There’s a very brief transition period where the probability goes from one to zero, but this becomes smaller and
smaller as $N \rightarrow \infty$.
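For what it's worth, the endpoints of this curve can be probed numerically. A self-contained Monte Carlo sketch (all names are mine; the transition sits somewhere below $\alpha = 1$, so I sample one ratio well below it and one above):

```python
import random

def gf2_solvable(A, b):
    """Eliminate the augmented matrix over GF(2); the system is
    unsolvable exactly when some row reduces to 0 = 1."""
    aug = [row[:] + [bi] for row, bi in zip(A, b)]
    ncols, rank = len(A[0]), 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(aug)) if aug[i][col]), None)
        if pivot is None:
            continue
        aug[rank], aug[pivot] = aug[pivot], aug[rank]
        for i in range(len(aug)):
            if i != rank and aug[i][col]:
                aug[i] = [x ^ y for x, y in zip(aug[i], aug[rank])]
        rank += 1
    return not any(row[-1] and not any(row[:-1]) for row in aug)

def p_solution(n, alpha, trials=100, rng=random):
    """Fraction of random 3-sparse instances at ratio alpha = M/N
    that are solvable; a crude stand-in for the animated curve."""
    m = int(alpha * n)
    hits = 0
    for _ in range(trials):
        rows = set()
        while len(rows) < m:                  # resample repeated rows
            rows.add(tuple(sorted(rng.sample(range(n), 3))))
        A = [[1 if j in cols else 0 for j in range(n)] for cols in rows]
        b = [rng.randint(0, 1) for _ in range(m)]
        hits += gf2_solvable(A, b)
    return hits / trials

random.seed(0)
print(p_solution(24, 0.5))   # well below the transition: close to 1
print(p_solution(24, 1.3))   # well above it: close to 0
```

At such a small $N$ the drop between the two regimes is smeared out, exactly as the animation suggests; sweeping $\alpha$ on a grid and increasing $N$ sharpens it.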
To me, I can’t help but yearn for an explanation for why this is happening.
Here’s one perspective. As you add more and more vectors (rows) to your matrix $G$, there will come a point where some of these rows will become linearly dependent. At this point, the values of
the parities for these rows will become super important. If they aren’t set the right way, there will be no solution (since there will be a contradiction that can’t be resolved). And since $\vec{b}
$ is chosen uniformly at random, there will often be no solution. Average this over a bunch of matrices, and you will get a curve like the animation above.
Where should this happen? An upper bound that seems reasonable to me is $\alpha_c = 1$, since that is where a matrix can become full rank, and therefore adding more rows makes them linearly
dependent. However, since we’ll likely start having dependent rows sooner, the threshold will be lower than that.
There’s also a perspective related to graph theory. It has to do with a notion of hyperloops, but this will take us a bit further than I want to go. If you want to learn more about it, see Endnote 1.
But here’s the catch: If we change the equation to $A\vec{x} = \vec{0}$, we still get a phase transition like the one above. The difference is that now the number of solutions is always at least one
(since $\vec{x} = \vec{0}$ is always a solution), so the transition is picked up with a different measure.
And that measure is found by taking the statistical physics perspective.
Who Ordered That?
Order, symmetries, and large-scale structures are the bread and butter of statistical physics. We want to do away with the pesky details, and instead focus on the big picture.
The concept of magnetization is one way to measure something about the system as a whole. Like a person who just learned about hammers and now sees everything as a nail, magnetization is used in many
contexts. But at its heart, magnetization is a way to measure similarity.
Imagine we have a bunch of particles which can take spin values of $s_i = \pm 1$, where $i$ is just the label for a particle.
Many models we have in physics have rules that ask for the spins to align. These often go by the name of Ising models, and they are the iconic models for statistical physics. The Ising model usually
involves some system of spins, with neighbouring interactions. The spins “want” to align, but when the temperature of the system is high, they have energy to not align. In fact, the spins will
basically be distributed randomly. In two dimensions, the model undergoes a phase transition as you lower the temperature: the spins go from being randomly distributed to being either all pointing up ($s_i = +1$) or all pointing down ($s_i = -1$). This happens at a critical temperature called $T_c$.
The measure we can look at here is the magnetization of a sample, which is just the average of the spin values: \(m = \langle s_i \rangle.\) If $m = \pm 1$, this means the spins are all aligned
(either up or down). Whereas if $m = 0$, then there are an equal number of spins that are up and down.
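If you want to play with this yourself, here's a minimal Metropolis sketch of the 2D Ising model (my own toy code, not what generated the figures; the lattice size, temperature, and step count are arbitrary choices):

```python
import math
import random

def metropolis_ising(L=16, T=1.5, steps=50_000, seed=0):
    """Minimal Metropolis sampler for the 2D Ising model (periodic boundaries).

    Returns the magnetization per spin m = <s_i> of the final configuration.
    Lattice size, temperature, and step count are arbitrary toy choices.
    """
    rng = random.Random(seed)
    s = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (s[(i - 1) % L][j] + s[(i + 1) % L][j]
              + s[i][(j - 1) % L] + s[i][(j + 1) % L])
        dE = 2 * s[i][j] * nb                 # energy cost of flipping s[i][j] (J = 1)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return sum(sum(row) for row in s) / (L * L)
```

Below the 2D critical temperature ($T_c \approx 2.269$ in these units), long runs magnetize toward $m = \pm 1$; well above it, $m$ hovers near zero.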
For an absolutely marvelous discussion about magnetization (with beautiful animations), I highly, highly, highly recommend Brian Hayes’s latest essay, “Three Months in Monte Carlo”. If you look at
Figure 2, you will see the phase transition as a function of the temperature. I’ve drawn what it looks like below.
We need to be careful when interpreting this diagram. It’s what happens for one sample of an Ising model. As you decrease the temperature, the sample will “decide” to either go towards $m = +1$ or $m
= -1$. This happens differently depending on the particular random details of a sample. The problem is that if you start averaging your value of $m$ over many samples, you will find the whole curve
is flat at $m = 0$. This happens because the samples going to $m = \pm 1$ as we cool the system will cancel each other out in the averaging.
Combatting this is straightforward: Instead of plotting $m$, we can plot $m^2$ or $\lvert m \rvert$. Just keep this in mind if you’re trying to implement a simulation like this in code and are
getting confused as to why you don’t see the transition.
Let’s take a look at how the magnetization changes as we increase $\alpha$.
It turns out that for our purposes, plotting just $m$ is fine. However, because of a technical aspect of the sampling, it will be better to plot something slightly different: \(q = \langle \left[ s_i \right]^2 \rangle.\) This is called the spin glass order parameter (see Endnote 2). There are a few layers of abstraction here, so let’s break them down.
The square brackets are used to find the magnetization per site. Concretely, they indicate an average taken over the same site for different solutions to the equation $A\vec{x} = \vec{0}$. This
result will always be between $\pm1$.
If you get a result of $\left[ s_i \right] = \pm 1$, this indicates that all configurations have the same value for this variable. On the other hand, $\left[ s_i \right] = 0$ tells us that there is
no tendency for variable $i$ to be either value.
So while there are three values that describe the “extremes”, really there are two extremes: every configuration takes the same value for a given variable, or there is no preference.
To capture this numerically, we can simply square the result. If there is no tendency for a variable to be one value or the other, this will still be true after squaring. But now we won’t
discriminate between variables which have been “magnetized” to $+1$ versus $-1$.
The average in angular brackets now tells us to average this overlap quantity over all the variables. This makes it super simple for plotting purposes, since we have just one number. However, you
could forego the final average and just plot the average spin-glass order parameter as a probability distribution over the different variables (giving you a histogram-like result).
To recap, there are three averages being done:
1. An average over configurations.
2. An average over variables.
3. An average over samples.
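In code, the first two averages look like this (a toy sketch of my own; `configs` stands in for the sampled solutions of a single matrix, with spins $\pm 1$, and the third average over samples would just be an outer loop over matrices):

```python
def spin_glass_q(configs):
    """Spin-glass order parameter for one sample.

    `configs` is a list of configurations (equal-length lists of +/-1 spins),
    e.g. the sampled solutions of A x = 0 for one matrix. The inner average
    over configurations is the square bracket [s_i]; squaring and then
    averaging over variables gives q = <[s_i]^2>.
    """
    n_conf, n_var = len(configs), len(configs[0])
    q_sites = [
        (sum(c[i] for c in configs) / n_conf) ** 2   # [s_i]^2 at site i
        for i in range(n_var)
    ]
    return sum(q_sites) / n_var                      # angular-bracket average
```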
If you picture these configurations as arrays, it looks like this for a given sample:
When calculating the order parameter, we have to look at many matrices $A$ for the curve to smooth out like in the animation. (I promise we're now done with the averaging!)
If you carry all these steps out, then you will be rewarded with a beautiful plot like this.
And there’s the transition! So the order parameter picks this up quite nicely. (Note that this is for $N = 100$.)
After the critical threshold $\alpha_c$, the variables all become magnetized, gravitating towards the solution $\vec{x} = \vec{0}$. The interpretation of the transition is the following: All of
the remaining solutions after the threshold “cluster” around the solution $\vec{x} = \vec{0}$.
Where exactly is the transition located? It turns out to be approximately at $\alpha_c \approx 0.918$, which is the value in the specific case of $p = 3$ for our model. I won’t go through the details of how to find it here, but check the References for the paper that shows this.
To get a feel for the transition, I wanted to make an animation focusing on one specific sample. It’s a 900-variable model, and I’ve arranged it in a 30x30 square for convenience. Don’t take too much
stock in the arrangement of the variables, which are shown as squares. Remember, the interactions can occur between any three of the variables, so this is just for show. I’m plotting the order
parameter $q_i$ at each site, which is the same as the equation for the order parameter $q$ without the final averaging over all the sites (the angular brackets). Watch for the transition, which occurs
at $\alpha_c \approx 0.918$.
Dramatic, no?
Notice how the variables are mostly undecided between values, indicated by their lighter colour. However, as we approach the critical threshold, there is suddenly an influx of darker squares, and it
soon takes over the board.
Watching this is very satisfying. It gives me a sense of what’s happening in the transition, with each site being free to do whatever it wants until the threshold is reached. At that point, there is a
force pushing all of the variables to match each other between configurations, until we have only one solution left. (There are still multiple solutions in the animation above, since not all squares
become dark. This is because I didn’t go to a higher value of $\alpha$.)
When you keep adding more rows to your matrix $A$, the moral of the story is that you will eventually reach a critical point where all of the solutions have the same values for most of the variables.
This is just the tip of the iceberg when it comes to satisfiability problems. These are problems with constraints and a vector $\vec{x}$ that attempts to satisfy them.
The setting of $k$-XORSAT is nice because we can do large simulations through Gaussian elimination, which is an efficient algorithm. In fact, the existence of Gaussian elimination is what makes
$k$-XORSAT a problem that’s in the complexity class P. Many other satisfiability problems are in NP, but they often show the same sort of phase transition. Since we can analytically wrap our heads
around $k$-XORSAT though, I figured this would be a good starting point for the curious learner. Plus, the statistical physics version of the model is easy to think about, making it an attractive
starting point.
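To make the counting concrete: row-reduce $A$ over GF(2) (addition mod 2), and the number of solutions to $A\vec{x} = \vec{0}$ is $2^{N - \mathrm{rank}(A)}$. Here's a toy sketch of that elimination (not the code behind the plots; rows are stored as integer bitmasks):

```python
def gf2_rank(rows, n_cols):
    """Row-reduce a binary matrix over GF(2) and return its rank.

    Each row is an int bitmask (bit j = entry in column j). The homogeneous
    system A x = 0 then has exactly 2 ** (n_cols - rank) solutions.
    """
    rows = list(rows)
    rank = 0
    for col in range(n_cols):
        pivot = next((r for r in range(rank, len(rows)) if (rows[r] >> col) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and (rows[r] >> col) & 1:
                rows[r] ^= rows[rank]   # XOR is addition mod 2
        rank += 1
    return rank
```

Once the rank hits $N$, the only solution left is $\vec{x} = \vec{0}$, which is the clustering picture above taken to its extreme.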
What began as a simple question about solutions to a matrix equation led to a discussion about statistical physics, magnetization, and the nature of Gaussian elimination. The phase transition tells
us something really specific about the average behaviour of these systems. You may have thought that all matrices are their own beasts, but it turns out that they can be remarkably similar in their
behaviour. The perspective of phase transitions teaches us that abstracting away from the particulars can lead us to simple explanations of complex phenomena.
Thank you to Grant Sanderson of 3Blue1Brown, James Schloss of LeiosOS, and everyone else who made the Summer of Math Exposition happen! All of us in the community appreciate what you’ve done.
1. The notion of hyperloops and how they affect the $p$-spin model can be found in Section V of this paper. I will warn you though: the diagrams are so old that they are very bad. Honestly,
understanding the notion of a hyperloop gave me a headache looking at Figure 1. I might have to write an essay on this idea just to give people a better introduction!
2. The spin glass order parameter is not consistently defined if you scour the literature. The idea is to have something that captures the overlap of different solutions (sometimes called replicas).
You can look at only two replicas, or many. Here I used many, but the effect is robust with two (though it takes more samples to get the curve to smooth out for the averaging).
3. Some technical tidbits. Because the number of solutions is exponential in $N$, I couldn’t do the full averaging over solutions that would be required for the animations and plots. Instead, I
made a compromise. I set the number of sampled solutions to be the minimum between 100 and the number of remaining solutions. This meant that I would look at 100 configurations for every matrix at
a given $\alpha$, unless the total number of solutions was fewer than this, in which case I took them all. This makes things a bit more memory efficient, and shouldn’t affect the results much.
1. I already mentioned Brian Hayes’s essay “Three Months in Monte Carlo”, but I would recommend all of his essays on Bit-Player if you’re the type of person who loves reading about computation.
2. A recent two-part blog post on the theory of replicas can be found here on the wonderful blog Windows on Theory. In the post I’ve linked to, they briefly discuss the satisfiability phase
transition, as well as work out some of the pesky integrals needed to analyze the behaviour of these ensembles. The post is about machine learning, so this gives you an idea of how broad these
ideas are!
3. The paper “Alternative solutions to diluted p-spin models and XORSAT problems” gives a more theory-based overview of what I covered in this essay. In particular, the paper shows how to derive the
threshold $\alpha_c$ by finding the solution to a transcendental equation (in the paper, it’s Equation 42).
DrGeo: A Math Teaching Toy that Physics Teachers Will Also Love
My other writings about Dr. Geo:
drgeo is a wonderful geometry teaching toy. Do you remember being told that one can draw figures and do many interesting things using a ruler and a pair of compasses alone? Well, drgeo does not say
that in words -- it simply lets you experience it directly through interaction! Not only can you create geometric objects using ruler and compasses, but also you can drag the independent objects
around and see the rest of the picture dance according to your construction rules.
This thing is the most fun software I've ever played since around 10 years ago when I quit games. And it has the same effect of keeping me from doing serious work I am supposed to do :-) If only I
had this to play with in my youth, my physics -- not only mathematics -- would be better.
And that's one important side point I want to make using this tutorial: drgeo is extremely useful not only for mathematics teachers, but also for physics teachers. Two of the examples are
specifically made for physics teachers, while hopefully mathematics teachers may find them just as useful for learning drgeo.
We assume that the readers of this tutorial are knowledgeable in high school mathematics, feel comfortable playing with unfamiliar software, and are willing to look up short passages in the official
manual pages when necessary. The readers are not expected to be familiar with GNU/Linux, nor are they expected to read the official manual pages in its entirety, either before or after reading this
tutorial. (I never did.) The pace will gradually grow faster as we go along. In order not to interrupt the flow of thoughts, some general construction tips are summarized in the last section.
So please grab a copy of the freeduc cd, boot your computer from the cdrom drive, and start enjoying drgeo by following (or not following :-) the steps as you read this tutorial! Don't be afraid to
experiment wild things with drgeo. It does not eat your computer. (Well at least it did not eat mine.) Please be ready to check the official manuals as we shall use short terms like "Point Tools
menu, then the Free Point menubutton" and expect you to follow the "HTML format" link in the manual page, then the "Basic Functions" link, then the "Construction Tools" link, then the "Point Tools"
link, then the "Free Point" link, and read the more detailed explanation. The contents page is especially handy for quick browsing and can save you a few intermediate clicks. You might want to
download the newest "drgeo-doc-*.tgz" file from the sourceforge page for easier off-line reference.
(a, b) <==> a x + b y = 1
transforms a point to a line (actually to a hyperplane in higher dimensions), and vice versa. A few results in the textbook are clearly visualized in the .fgeo example circ_dual.fgeo:
• The dual line A' of a point A can be constructed by joining the two tangent points of the supporting lines from A to the circle.
• As the point approaches the center of the circle, the dual line approaches the infinity, and vice versa.
• The dual of the line C' joining two points A and B, is the intersection of the two dual lines A' of A and B' of B.
Drag A and/or B around to see these effects.
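You can also check the first bullet numerically. For the unit circle, the dual of A = (a, b) is the line a x + b y = 1, and a point P on the circle is a support (tangent) point exactly when P · A = 1. A short Python sketch, just for verification (drgeo itself gives you all this by construction):

```python
import math

def tangent_points(a, b):
    """Support (tangent) points on the unit circle from an external point A = (a, b).

    P on the unit circle is a tangent point iff OP is perpendicular to AP,
    i.e. P . (A - P) = 0, i.e. P . A = 1: exactly the dual line a x + b y = 1.
    So we simply intersect that line with the circle.
    """
    d2 = a * a + b * b
    assert d2 > 1.0, "A must lie outside the unit circle"
    foot = (a / d2, b / d2)                      # foot of perpendicular from O
    half = math.sqrt(1.0 - 1.0 / d2)             # half-chord length
    u = (-b / math.sqrt(d2), a / math.sqrt(d2))  # unit vector along the line
    return [(foot[0] + s * half * u[0], foot[1] + s * half * u[1]) for s in (1.0, -1.0)]
```

Both returned points lie on the unit circle and on the dual line of A, which is the "supporting property" the construction displays.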
Now let's take a look at the hidden intermediate steps. From the Other functions menu choose the Edit Object Styles menubutton and suddenly all intermediate construction steps are shown. Dashed lines
as well as some points (difficult to tell visually) are hidden objects not shown in the usual working mode. Say we want to be able to change the size of the unit circle. We need to bring the point R
back to normal state from the hidden state. So click on R to bring up the dialog, check "Not masked" under visibility, and close the dialog. Then click the Move Object menubutton and all objects in
the intermediate steps except R are hidden again.
Next you will construct this figure from scratch. Select "File" "New" "Figure" to start a new figure. Just in case you feel it necessary, the small tabs at the bottom can be used to switch between
the original figure and your figure.
1. Create a free point as the center of the unit circle. (Point Tools then Free Point)
2. You may give the point a name now, say "O". (Other functions then Edit Object Styles) Or you may do all the editing/naming in one batch after the entire construction is completed.
3. Create a free point "R" to serve as the other end point of a radius.
4. Create the unit circle UC. (Line Tools then Circle, then follow the hint at the very bottom, or follow your instinct :-)
5. Create a free point "A" outside the circle, and make it large and blue. The color and size options can be changed in the Edit Object Styles dialog, the same place where its name is input.
the length of OQ, a slightly simpler task than finding the position of Q. Observe that AO:OP = PO:OQ. This unknown length OQ can be found as OS, using the Ratio Construction where R is any point on
the circle. (See figure, lower part.) Once S is created, the two points of support can be easily found:
1. Draw a circle centered at O with radius OS. As the figure gets more and more complex, objects may happen to overlap or even mathematically coincide with each other. If you click on one of the
overlapping or coincident objects, a "???" mark is displayed, meaning that there are more than one object for you to choose from. If that is the case, be sure to hold down the mouse button until
you choose the correct object. The yellow undo button (above the tool icons) is especially useful when you make wrong moves and get confused.
2. Find the intersection Q of the new circle with OA.
3. Create the blue line A' orthogonal to OA through Q. In fact this is the just dual of A we want -- we arrive at it before finding the points of support.
4. Still, it would be nice to find the points of support anyway. They help to show the "supporting property" clearly to the audience. Just intersect line A' with the unit circle UC.
Point B and its dual line B' can be created similarly. To demonstrate the third dual property mentioned earlier, point C is chosen to be the intersection of lines A' and B'. Its dual line C' need not
be created using the duality rule. It is simply the line connecting A and B. Nonetheless, encouraging inquisitive students to create C' the laborious way would help convince them of this property
without resorting to algebra.
Forming Images through Convex and Concave Lenses
The formation of images through convex and/or concave lenses is dictated by two simple rules:
1. An inbound ray parallel to the axis of the lens goes, upon exiting the lens, along the straight line passing the focus.
2. An inbound ray passing through the center of the lens does not change direction.
The image of a point through a lens is thus the intersection of the above two rays. Repeat this procedure for every point of an object under observation, and one gets the image of the entire object
through a lens.
This drgeo example lens.fgeo demonstrates the results of applying these principles to a little man. Hold arbitrary parts of the little man on the left and drag him around. See how his image on the
right correspondingly moves along. Then move the focus and see how the effects of the lens switch between being convex and being concave.
It's your turn now to re-create this figure. First create a circle to be the head. Let's call the center A and the other end of the radius B. We also need the origin O, another point X to point in
the direction of positive X, their connecting line the X-axis, and a line perpendicular to it to serve as the Y-axis. (Transformation Tools then Parallel Line) We also need the focus F and require
that it stay on the X-axis. It is created using the same Point Tools then Free Point menubutton, but make sure to click right on the X-axis (a uni-dimensional object by the way) while creating it.
Then you can verify that your F is constrained to move only along the horizontal line.
Later we will want to tell drgeo to "map every point C on the circle AB by some rule to point C' ". So in a manner similar to creating F, we now create a free point C on the circle AB, another
uni-dimensional object. This is not unlike the dummy variable in a mathematical statement such as "For all integers x, ..." Again, make sure to verify that your C is constrained to move only on the circle.
Finding C', the image of C, takes a few steps but is rather straightforward by following the two optics principles.
The fun begins when we ask drgeo to generate the locus of C' for all possible positions of C. In the Line Tools menu choose Locus, select the free point C first, then click on the dependent point C',
and start admiring the resulting ellipse. Hmmm, the image of a circle under these two optical principles is an ellipse... That's something they never said in the high school textbooks.
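If you'd like to verify that claim outside drgeo, here is a short Python check (using NumPy; the lens is placed on the y-axis with focus at (f, 0), the mapping comes from intersecting the two rays described above, and the circle's center and radius are arbitrary choices):

```python
import numpy as np

def lens_image(points, f):
    """Map object points through the two-ray construction.

    For a lens on the y-axis with focus at (f, 0), the parallel ray and the
    central ray intersect at (f*x/(x+f), f*y/(x+f))."""
    pts = np.asarray(points, dtype=float)
    s = f / (pts[:, 0] + f)
    return pts * s[:, None]

# Image of a circle (think: the little man's head; center and radius arbitrary).
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.c_[-5.0 + np.cos(theta), 2.0 + np.sin(theta)]
img = lens_image(circle, f=2.0)

# Fit a general conic A x^2 + B xy + C y^2 + D x + E y + F = 0 through the image
# points; the null direction of the design matrix gives the coefficients.
x, y = img[:, 0], img[:, 1]
M = np.c_[x**2, x * y, y**2, x, y, np.ones_like(x)]
_, _, Vt = np.linalg.svd(M)
A, B, C, D, E, F = Vt[-1]
# B^2 - 4AC < 0 means the fitted conic is an ellipse.
```

The mapping is a projective transformation, so the image of a circle is always a conic; as long as the circle stays clear of the line x = -f, it comes out as an ellipse.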
The rest of the figure can be completed following similar steps. The figure would look nicer if the intermediate objects are masked away. My version also paints the corresponding parts in the little
man and its image in the same color for easier identification.
Studying the Trajectory of an Object under Gravity
Given the initial velocity vector V0, one can compute the trajectory of an object under the effect of the gravity g. Many mathematical properties of the trajectory and velocity vector can be observed
by interacting with accel.fgeo (or the big5 version accel.big5.fgeo). Drag the point t along the horizon, and you will see the instantaneous velocity at each point. Notice that the x component
remains constant, while the y component decreases by an amount proportional to the change in the position of t. Drag V0 around the circle, and you can see how the direction of the velocity affects the
trajectory while the speed remains constant. One also sees why field athletes throw roughly at the 45 degree angle. Now try to increase/decrease the strength of g (actually please drag g_handle
because g is too close to the origin) and see why Armstrong jumped higher on the moon than he did on earth.
You could unmask the intermediate steps to see how this figure is constructed. Alternatively, you could read a step-by-step textual description hidden in the figure. Look closely for a vertical row
of 7 dots on the left border of the figure. Drag it towards the right and you'll find the textual description. Highlight any step and you'll see the corresponding object flash in the figure. You can
also click on the little triangle to examine the details of any interesting step. Here is a brief account of the steps in constructing the acceleration example.
1. Create the horizon and a circle whose center lies on the horizon.
2. Create a point V0 on the circle and create its horizontal and vertical projections.
3. Create the point g on the vertical direction representing the gravity. (For ease of manipulation, I actually created g_handle first, and then used 0.3 g_handle as the true gravity.)
4. Next we find the topmost point of the trajectory (x_h, y_h). Let t_h be the time to reach there. Substitute t_h by V0_y/g into x_h = V0_x * t_h and y_h = V0_y * t_h - g * t_h^2 / 2 to find x_h =
V0_x V0_y / g and y_h = V0_y^2 / (2g). These lengths can be found using the ratio construction.
5. To simplify computation, we mentally switch to the topmost point as the origin. The equation of the trajectory becomes: y = - g * x^2 / (2 * V0_x ^2)
6. Find the point (x_f, y_f) on the trajectory with x_f = 2 y_f. On any parabola, this point has the same height (y-coordinate) as the focus. Computation shows that y_f = - V0_x^2 / (2 * g)
7. Once you get that length in the picture, the focus and the directrix can be drawn. Invoking the "equal-distance" definition of a parabola, the trajectory can be found using the Locus tool. Let's
call the free point on the horizon P, and the dependent point tracing the locus, R.
8. In the Line Tools choose the Vector tool, create the V0_x vector. Then in the Transformation Tools choose the Translation tool to translate R by the V0_x vector. This vector stays constant along
the trajectory.
9. We already have the tangent line during the locus construction. So the V0_y vector can be easily found.
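The formulas in steps 4 through 7 are easy to verify numerically. Here is a short Python check (the values of V0 and g are illustrative):

```python
def trajectory_checks(v0x=3.0, v0y=4.0, g=9.8):
    """Numeric check of the apex and focus formulas (illustrative values).

    Trajectory: x(t) = v0x*t, y(t) = v0y*t - g*t^2/2.
    """
    t_h = v0y / g                                   # time to the topmost point
    x_h = v0x * t_h
    y_h = v0y * t_h - 0.5 * g * t_h ** 2
    assert abs(x_h - v0x * v0y / g) < 1e-12         # step 4: x_h = V0_x V0_y / g
    assert abs(y_h - v0y ** 2 / (2 * g)) < 1e-12    # step 4: y_h = V0_y^2 / (2g)
    # In apex coordinates y = -g x^2 / (2 v0x^2) = -x^2/(4p), so the focal
    # distance is p = v0x^2 / (2g) (step 6, up to sign): focus at (0, -p),
    # directrix at y = +p. Check the equal-distance property used in step 7.
    p = v0x ** 2 / (2 * g)
    for t in (0.1, 0.3, 0.7):
        X = v0x * t - x_h
        Y = (v0y * t - 0.5 * g * t ** 2) - y_h
        d_focus = (X ** 2 + (Y + p) ** 2) ** 0.5
        d_directrix = abs(p - Y)
        assert abs(d_focus - d_directrix) < 1e-9
    return x_h, y_h, p
```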
To be fixed below this line
Other Interesting Features
Macro construction saves time and reduces the clutter of your figures. Commonly used constructions such as the ratio construction, and loci such as ellipses/parabolas, are good candidates for macro construction.
Tips for Commonly Used Constructions
Ratio Construction: Given lengths a, b, and c, find length d such that a:b = c:d . One possible solution: Create two line segments sharing one end point, say O, with lengths a and b, respectively.
Let's say the other end points are A and R, respectively, as named in the duality figure. Measure length c from O along OA and create the point T. This can be done, for example, using Point Tools
then Intersection to find the intersection of OA with a circle of radius c. (In general T and R are on two different circles although in the duality example they happen to be on the same circle.) Now
draw a line L parallel to AR through T (Transformation Tools then Parallel Line). (Of course you need to create line segment AR prior to that.) Finally intersect L with OR to find S, and OS then has
the desired length, d.
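In coordinates, the ratio construction is just the intercept (Thales) theorem, d = b·c/a. A quick numerical check in Python, with an arbitrary direction for the second ray:

```python
import math

def ratio_construction(a, b, c):
    """Fourth proportional: a : b = c : d  =>  d = b * c / a (the length OS)."""
    return b * c / a

def thales_check(a=3.0, b=2.0, c=1.5, angle=0.7):
    """Carry the construction out in coordinates.

    O at the origin, A = (a, 0) and T = (c, 0) on one ray, R = b*u on a
    second ray with unit vector u. The line through T parallel to AR meets
    the second ray at S; return |OS|, which should equal b*c/a.
    """
    u = (math.cos(angle), math.sin(angle))
    A = (a, 0.0)
    R = (b * u[0], b * u[1])
    T = (c, 0.0)
    dx, dy = R[0] - A[0], R[1] - A[1]          # direction of AR
    # Solve T + s*(dx, dy) = t*u for t (Cramer's rule on the 2x2 system).
    det = u[0] * (-dy) - (-dx) * u[1]
    t = (T[0] * (-dy) - (-dx) * T[1]) / det
    return t
```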
1. Use the locus tool to create an ellipse.
Mesh analysis
"Loop current" redirects here. For the ocean current in the Gulf of Mexico, see
Loop Current
. For the electrical signalling scheme, see
current loop
Mesh analysis (or the mesh current method) is a method that is used to solve planar circuits for the currents (and indirectly the voltages) at any place in the electrical circuit. Planar circuits are
circuits that can be drawn on a plane surface with no wires crossing each other. A more general technique, called loop analysis (with the corresponding network variables called loop currents) can be
applied to any circuit, planar or not. Mesh analysis and loop analysis both make use of Kirchhoff’s voltage law to arrive at a set of equations guaranteed to be solvable if the circuit has a
solution.^[1] Mesh analysis is usually easier to use when the circuit is planar, compared to loop analysis.^[2]
Mesh currents and essential meshes
Mesh analysis works by arbitrarily assigning mesh currents in the essential meshes (also referred to as independent meshes). An essential mesh is a loop in the circuit that does not contain any other
loop. Figure 1 labels the essential meshes with one, two, and three.^[3]
A mesh current is a current that loops around the essential mesh, and the equations are set up and solved in terms of them. A mesh current may not correspond to any physically flowing current, but the
physical currents are easily found from them.^[2] It is usual practice to have all the mesh currents loop in the same direction. This helps prevent errors when writing out the equations. The
convention is to have all the mesh currents looping in a clockwise direction.^[3] Figure 2 shows the same circuit from Figure 1 with the mesh currents labeled.
Solving for mesh currents instead of directly applying Kirchhoff's current law and Kirchhoff's voltage law can greatly reduce the amount of calculation required. This is because there are fewer mesh
currents than there are physical branch currents. In figure 2 for example, there are six branch currents but only three mesh currents.
Setting up the equations
Each mesh produces one equation. These equations are the sum of the voltage drops in a complete loop of the mesh current.^[3] For problems more general than those including current and voltage
sources, the voltage drops will be the impedance of the electronic component multiplied by the mesh current in that loop.^[4]
If a voltage source is present within the mesh loop, the voltage at the source is either added or subtracted depending on if it is a voltage drop or a voltage rise in the direction of the mesh
current. For a current source that is not contained between two meshes, the mesh current will take the positive or negative value of the current source depending on if the mesh current is in the same
or opposite direction of the current source.^[3] The following is the same circuit from above with the equations needed to solve for all the currents in the circuit.
Once the equations are found, the system of linear equations can be solved by using any technique to solve linear equations.
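For a concrete (hypothetical) two-mesh circuit, the resulting system can be solved numerically, for instance with NumPy. The component values below are made up for illustration:

```python
import numpy as np

# Hypothetical two-mesh circuit: a 10 V source and R1 = 2 ohm in mesh 1,
# R3 = 4 ohm shared between the meshes, R2 = 3 ohm in mesh 2; both mesh
# currents taken clockwise. KVL around each mesh gives:
#   mesh 1: (R1 + R3) i1 - R3 i2 = 10
#   mesh 2: -R3 i1 + (R2 + R3) i2 = 0
R1, R2, R3, V = 2.0, 3.0, 4.0, 10.0
A = np.array([[R1 + R3, -R3],
              [-R3, R2 + R3]])
b = np.array([V, 0.0])
i1, i2 = np.linalg.solve(A, b)
# The physical current through the shared resistor R3 is i1 - i2.
```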
Special cases
There are two special cases in mesh current: currents containing a supermesh and currents containing dependent sources.
A supermesh occurs when a current source is contained between two essential meshes. The circuit is first treated as if the current source is not there. This leads to one equation that incorporates
two mesh currents. Once this equation is formed, an equation is needed that relates the two mesh currents with the current source. This will be an equation where the current source is equal to one of
the mesh currents minus the other. The following is a simple example of dealing with a supermesh.^[2]
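Since the original example figure is not reproduced here, a numerical sketch with made-up values shows the procedure: one supermesh equation plus the current-source constraint.

```python
import numpy as np

# Hypothetical supermesh: a 2 A current source sits on the branch shared by
# meshes 1 and 2; R1 = 4 ohm is in mesh 1, R2 = 6 ohm in mesh 2, and a 20 V
# source drives mesh 1 (values made up for illustration).
# KVL around the supermesh (walk around both meshes, skipping the source):
#   R1 i1 + R2 i2 = 20
# Constraint from the shared current source (clockwise mesh currents):
#   i2 - i1 = 2
R1, R2, V, Is = 4.0, 6.0, 20.0, 2.0
A = np.array([[R1, R2],
              [-1.0, 1.0]])
b = np.array([V, Is])
i1, i2 = np.linalg.solve(A, b)
```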
Dependent sources
A dependent source is a current source or voltage source that depends on the voltage or current of another element in the circuit. When a dependent source is contained within an essential mesh, the
dependent source should be treated like an independent source. After the mesh equation is formed, a dependent source equation is needed. This equation is generally called a constraint equation. This
is an equation that relates the dependent source’s variable to the voltage or current that the source depends on in the circuit. The following is a simple example of a dependent source.^[2]
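Again, with the original example figure unavailable, a numerical sketch with made-up values illustrates how the constraint equation folds into the mesh equations:

```python
import numpy as np

# Hypothetical circuit with a current-controlled voltage source (CCVS):
# mesh 1 has a 10 V source and R1 = 2 ohm, and shares R2 = 4 ohm with mesh 2;
# mesh 2 also contains a dependent source whose voltage is k * i1, with
# k = 3 ohm (a transresistance). All values are made up for illustration.
# The constraint equation v_dep = k * i1 is substituted straight into KVL:
#   mesh 1: (R1 + R2) i1 - R2 i2 = 10
#   mesh 2: (k - R2) i1 + R2 i2 = 0
R1, R2, k, V = 2.0, 4.0, 3.0, 10.0
A = np.array([[R1 + R2, -R2],
              [k - R2, R2]])
b = np.array([V, 0.0])
i1, i2 = np.linalg.solve(A, b)
```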
See also
1. ↑ Hayt, William H., & Kemmerly, Jack E. (1993). Engineering Circuit Analysis (5th ed.), New York: McGraw Hill.
2. 1 2 3 4 Nilsson, James W., & Riedel, Susan A. (2002). Introductory Circuits for Electrical and Computer Engineering. New Jersey: Prentice Hall.
3. 1 2 3 4 Lueg, Russell E., & Reinhard, Erwin A. (1972). Basic Electronics for Engineers and Scientists (2nd ed.). New York: International Textbook Company.
4. ↑ Puckett, Russell E., & Romanowitz, Harry A. (1976). Introduction to Electronics (2nd ed.). San Francisco: John Wiley and Sons, Inc.
External links
This article is issued from - version of the 10/25/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
Question ID - 151709 | SaraNextGen Top Answer
The cell has an emf of 2V and the internal resistance of 3.9
a) 1.95 b) 1.5V c) 2V d) 1.8V
When a cell of emf E is connected to a resistance of 3.9
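The figures above appear truncated (the units and the internal-resistance value were lost). Assuming the usual form of this problem (emf E = 2 V, external resistance R = 3.9 Ω, internal resistance r = 0.1 Ω; the resistance values are my assumption, chosen to be consistent with answer (a)), the terminal voltage follows from V = ER/(R + r):

```python
def terminal_voltage(emf, R, r):
    """Terminal voltage across the external resistor: V = emf * R / (R + r)."""
    return emf * R / (R + r)

# Assumed values: emf = 2 V, R = 3.9 ohm, r = 0.1 ohm  =>  V = 2 * 3.9 / 4.0 = 1.95 V
```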
Selecting the number of components
This vignette is a short tutorial on the use of the core functions of the bootPLS package.
These functions can also be used to reproduce the simulations and the figures of the book chapter Magnanensi et al. (2016) https://doi.org/10.1007/978-3-319-40643-5_18 and of the two articles Magnanensi et al. (2017) https://doi.org/10.1007/s11222-016-9651-4 and Magnanensi et al. (2021) https://doi.org/10.3389/fams.2021.693126.
Pine real dataset: pls and spls regressions
Loading and displaying dataset
Load and display the pinewood worm dataset.
data(pine, package = "plsRglm")
Michel Tenenhaus reported in his book, La régression PLS (1998) Technip, Paris, that most of the expert biologists claimed that this dataset features two latent variables, which is tantamount to the
PLS model having two components.
PLS LOO and CV
Leave one out CV (K=nrow(pine)) one time (NK=1).
bbb <- plsRglm::cv.plsR(log(x11)~.,data=pine,nt=6,K=nrow(pine),NK=1,verbose=FALSE)
Set up 6-fold CV (K=6), 100 times (NK=100), and use random=TRUE to randomly create folds for repeated CV.
Display the results of the cross-validation.
The \(Q^2\) criterion is recommended in that PLSR setting without missing data. A model with 1 component is selected by the cross-validation as displayed by the following figure. Hence the \(Q^2\)
criterion (1 component) does not agree with the experts (2 components).
As for the CV Press criterion it is unable to point out a unique number of components.
PLS (Y,T) Bootstrap
The package features our bootstrap based algorithm to select the number of components in plsR regression. It is implemented with the nbcomp.bootplsR function.
The verbose=FALSE option suppresses messages output during the algorithm, which is useful to replicate the bootstrap technique. To set up parallel computing, you can use the parallel and the ncpus options.
res_boot_rep <- replicate(20,nbcomp.bootplsR(Y=ypine,X=Xpine,R =500,verbose =FALSE,parallel = "multicore",ncpus = 2))
It is easy to display the results with the barplot function.
A model with two components should be selected using our bootstrap based algorithm to select the number of components. Hence the number of component selected with our algorithm agrees with what was
stated by the experts.
sPLS (Y,T) Bootstrap
The package also features our bootstrap based algorithm to select, for a given \(\eta\) value, the number of components in spls regression. It is implemented with the nbcomp.bootspls function.
A doParallel and foreach based parallel computing version of the algorithm is implemented as the nbcomp.bootspls.para function.
Bootstrap (Y,X) for the coefficients with number of components updated for each resampling
Pinewood worm data reloaded.
data(pine, package = "plsRglm")
datasetpine <- cbind(ypine,Xpine)
Replicate the results to get the bootstrap distributions of the selected number of components and the coefficients.
Parallel computing support with the ncpus and parallel="multicore" options.
Aze real dataset: binary logistic plsRglm and sgpls regressions
PLSGLR (Y,T) Bootstrap
Loading the data and creating the data frames.
dataset <- cbind(y=yaze_compl,Xaze_compl)
Fitting a logistic PLS regression model with 10 components. You have to use the family option when fitting the plsRglm.
Perform the bootstrap based algorithm with the nbcomp.bootplsRglm function. By default 250 resamplings are carried out.
Plotting the bootstrap distributions of the coefficients of the components.
Computing the bootstrap based confidence intervals of the coefficients of the components.
sparse PLSGLR (Y,T) Bootstrap
The package also features our bootstrap based algorithm to select, for a given \(\eta\) value, the number of components in sgpls regression. It is implemented in the nbcomp.bootsgpls.
data(prostate, package="spls")
nbcomp.bootsgpls((prostate$x)[,1:30], prostate$y, R=250, eta=0.2, typeBCa = FALSE)
A doParallel and foreach based parallel computing version of the algorithm is implemented as the nbcomp.bootsgpls.para function.
nbcomp.bootsgpls.para((prostate$x)[,1:30], prostate$y, R=250, eta=c(.2,.5), maxnt=10, typeBCa = FALSE)
|
{"url":"http://cran.rediris.es/web/packages/bootPLS/vignettes/UseNbcomp.html","timestamp":"2024-11-09T06:16:56Z","content_type":"text/html","content_length":"84091","record_id":"<urn:uuid:a69eb141-d43b-455a-b3ca-afa09d3eb56d>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00852.warc.gz"}
|
An Etymological Dictionary of Astronomy and Astrophysics
angular momentum catastrophe
نگونزار ِجنباک ِزاویهای
negunzâr-e jonbâk-e zâviye-yi
Fr.: catastrophe du moment angulaire
A problem encountered by the → cold dark matter model of galaxy formation. The model predicts galaxies that are too small and have too little → angular momentum, in contrast to real, observed galaxies. → cusp problem; → missing dwarfs.
→ angular; → momentum; → catastrophe
angular momentum parameter
پارامون ِجنباک ِزاویهای
pârâmun-e jonbâk-e zâviye-yi
Fr.: paramètre de moment angulaire
The ratio J/M, where J is the → angular momentum of a → rotating black hole and M the mass of the black hole.
→ angular; → momentum; → parameter.
angular momentum problem
پراسهی ِجنباک ِزاویهای
parâse-ye jonbâk-e zâviye-yi
Fr.: problème de moment angulaire
1) The fact that the Sun, which contains 99.9% of the mass of the → solar system, accounts for about 2% of the total → angular momentum of the solar system. The problem of outward → angular momentum
transfer has been a main topic of interest for models attempting to explain the origin of the solar system.
2) More generally, in star formation studies, the question of the origin of the angular momentum of a star and the evolution of its distribution during the early history of a star. Consider a
filamentary molecular cloud with a length of 10 pc and a radius of 0.2 pc, rotating about its long axis with a typical → angular velocity of Ω = 10^-15 s^-1. At a matter density of 20 cm^-3, the
cloud is about 1 → solar mass. The cloud collapses to form a star with radius of 6 x 10^10 cm. The conservation of angular momentum (∝ ΩR^2) requires that as the radius decreases from 0.2 pc to the
stellar value, a factor of 10^7, the value of Ω must increase by 14 orders of magnitude to 10^-1 s^-1. The star's rotational velocity will be 20% the speed of light and the ratio of → centrifugal
force to gravity at the equator will be about 10^4. Observational data, however, indicate that the youngest stars are in fact rotating quite slowly, with rotational velocities of 10% of the →
break-up velocity. The angular momentum problem was first studied in the context of single stars forming in isolation (L. Mestel, 1965, Quart. J. R. Astron. Soc. 6, 161). For more information see,
e.g., P. Bodenheimer, 1995, ARAA 33, 199; H. Zinnecker, 2004, RevMexAA 22, 77; R. B. Larson, 2010, Rep. Prog. Phys. 73, 014901, and references therein.
→ angular; → momentum; → problem.
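The order-of-magnitude estimate in the entry above can be checked with a short Python sketch (constants are approximate and the variable names are illustrative):

```python
PC_CM = 3.086e18        # one parsec in centimetres
R_CLOUD = 0.2 * PC_CM   # initial cloud radius, cm
R_STAR = 6e10           # final stellar radius, cm
OMEGA_CLOUD = 1e-15     # initial angular velocity, s^-1
C_LIGHT = 3e10          # speed of light, cm/s

# Conservation of angular momentum: Omega * R^2 = constant,
# so Omega scales as (R_cloud / R_star)^2 during collapse.
omega_star = OMEGA_CLOUD * (R_CLOUD / R_STAR) ** 2
v_rot = omega_star * R_STAR   # equatorial rotation speed, cm/s

print(f"radius contraction factor: {R_CLOUD / R_STAR:.1e}")  # ~1e7
print(f"final Omega: {omega_star:.1e} s^-1")                 # ~1e-1 s^-1
print(f"rotation speed: {v_rot / C_LIGHT:.2f} c")            # ~0.2 c
```

The factor of 10^7 in radius and the 14 orders of magnitude in Ω quoted in the entry come out directly.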
angular momentum transfer
تراوژ ِجنباک ِزاویهای
tarâvaž-e jonbâk-e zâviye-yi
Fr.: transfert de moment angulaire
A process whereby in a rotating, non-solid system matter is displaced toward (→ accretion) or away from (→ mass loss) the rotation center. See also → magnetorotational instability.
→ angular; → momentum; → transfer.
angular momentum transport
ترابرد ِجنباک ِزاویهای
tarâbord-e jonbâk-e zâviye-yi
Fr.: transfert de moment angulaire
Same as → angular momentum transfer.
→ angular; → momentum; → transport.
orbital angular momentum
جنباک ِزاویهای ِمداری
jonbâk-e zâviyeyi-ye madâri
Fr.: moment cinétique orbital, ~ angulaire ~
1) Mechanics: The → angular momentum associated with the motion of a particle about an origin, equal to the cross product of the position vector (r) with the linear momentum (p = mv): L = r x p.
Although r and p are constantly changing direction, L is a constant in the absence of any external force on the system. Also known as orbital momentum.
2) Quantum mechanics: The → angular momentum operator associated with the motion of a particle about an origin, equal to the cross product of the position vector with the linear momentum, as opposed
to the → spin angular momentum. In quantum mechanics the orbital angular momentum is quantized. Its magnitude is confined to discrete values given by the expression: ħ √(l(l + 1)), where l is the
orbital angular momentum quantum number, or azimuthal quantum number, and is limited to positive integral values (l = 0, 1, 2, ...). Moreover, the orientation of the direction of rotation is
quantized, as determined by the → magnetic quantum number. Since the electron carries an electric charge, the circulation of electron constitutes a current loop which generates a magnetic moment
associated to the orbital angular momentum.
→ orbital; → angular; → momentum.
rotational angular momentum
جنباک ِزاویهای ِچرخشی
jonbâk-e zâviyeyi-ye carxeši
Fr.: moment angulaire rotationnel, moment cinétique ~
The → angular momentum of a body rotating about an axis. The rotational angular momentum of a solid homogeneous sphere of mass M and radius R rotating about an axis passing through its center with a
period of T is given by: L = 4πMR^2/5T.
→ rotational; → angular; → momentum.
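The quoted formula follows in one line from the standard results for a solid homogeneous sphere:

```latex
L = I\,\omega,\qquad
I = \tfrac{2}{5} M R^{2},\qquad
\omega = \frac{2\pi}{T}
\;\Longrightarrow\;
L = \frac{2}{5} M R^{2}\cdot\frac{2\pi}{T} = \frac{4\pi M R^{2}}{5T}.
```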
specific angular momentum
جنباک ِزاویهای ِآبیزه
jonbâk-e zâvie-yi-ye âbizé
Fr.: moment angulaire spécifique
→ Angular momentum per unit mass.
→ specific; → angular; → momentum.
spin angular momentum
جنباک ِزاویهای ِاسپین
jonbâk-e zâviyeyi-ye espin
Fr.: moment angulaire de spin
An intrinsic quantum mechanical characteristic of a particle that has no classical counterpart but may loosely be likened to the classical → angular momentum of a particle arising from rotation about
its own axis. The magnitude of spin angular momentum is given by the expression S = ħ √ s(s + 1), where s is the → spin quantum number. As an example, the spin of an electron is s = 1/2; this means
that its spin angular momentum is (ħ /2) √ 3 or 0.91 x 10^-34 J.s. In addition, the projection of an angular momentum onto some defined axis is also quantized, with a z-component S[z] = m[s]ħ. The
only values of m[s] (magnetic quantum number) are ± 1/2. See also → Stern-Gerlach experiment.
→ spin; → angular; → momentum.
|
{"url":"https://dictionary.obspm.fr/?showAll=1&formSearchTextfield=angular+momentum","timestamp":"2024-11-11T23:43:09Z","content_type":"text/html","content_length":"24457","record_id":"<urn:uuid:1f6fcc8b-c17c-4d18-87db-f7d20702fbe4>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00120.warc.gz"}
|
7.06 Add decimals | Grade 4 Math | Virginia SOL 4 - 2022 Edition
Are you ready?
Do you remember how to use the standard algorithm to add numbers together? Let's try this problem to practice.
Find the value of 285 + 113.
• You might notice that sometimes the standard algorithm is called the 'vertical algorithm'. Let's think about why. When we use the standard algorithm, we line our numbers up in 'vertical' place
value columns.
This video shows us how to add numbers with decimals using the standard algorithm.
Question 1
Find 0.15 + 0.61, giving your answer as a decimal.
This video shows us how to continue patterns with decimals using addition.
Question 2
Consider the following pattern.
1. What is the pattern?
0.03   0.12   0.21   ___   ___   ___

The numbers are increasing by 9.
The numbers are increasing by 0.9.
The numbers are increasing by 0.09.
The numbers are increasing by 90.
2. Now complete the pattern.
0.03   0.12   0.21   ___   ___   ___
When setting up the standard algorithm for numbers with decimals, we must line up the decimal points so that we are adding digits with the same place value.
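The lining-up rule can be checked programmatically. A small sketch using Python's decimal module (exact decimal arithmetic; an aid, not part of the lesson):

```python
from decimal import Decimal

# Add decimals exactly, as the standard algorithm would
print(Decimal("0.15") + Decimal("0.61"))  # 0.76

# Continue the pattern 0.03, 0.12, 0.21, ... (step 0.09)
pattern = [Decimal("0.03")]
for _ in range(5):
    pattern.append(pattern[-1] + Decimal("0.09"))
print(pattern)  # ends at 0.48
```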
|
{"url":"https://mathspace.co/textbooks/syllabuses/Syllabus-1117/topics/Topic-21515/subtopics/Subtopic-276799/?ref=blog.mathspace.co","timestamp":"2024-11-04T07:57:43Z","content_type":"text/html","content_length":"383869","record_id":"<urn:uuid:4e14d043-ab71-4c5f-bf55-4f7b889f1fe8>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00199.warc.gz"}
|
How a Secret Society Discovered Irrational Numbers
Myths and legends surround the origins of these numbers
By Manon Bischoff
The ancient scholar Hippasus of Metapontum was punished with death for his discovery of irrational numbers—or at least that’s the legend. What actually happened in the fifth century B.C.E. is far
from clear.
Hippasus was a Pythagorean, a member of a sect that dealt with mathematics and number mysticism, among other things. A core element of the Pythagoreans’ teachings related to harmonic numerical
relationships, which included fractions of whole numbers.
The whole world, they believed, could be described using rational numbers, including natural numbers and fractions. Yet when Hippasus examined the length ratios of a pentagram—the symbol of the
Pythagoreans—the story goes, he realized that some of the lengths of the shape’s sides could not be expressed as fractions. He thus provided the first proof of the existence of irrational numbers.
From here, the accounts of Hippasus diverge. Some say that the Pythagoreans took offense at this assertion because such numbers went against their worldview. In other tales, Hippasus made his results
public and thus violated the sect’s secrecy. Either way, he drowned in the sea after his discovery. Some reports claim that the Pythagoreans threw him off a ship. Others assert that his death was an
accident that the Pythagoreans regarded as divine punishment.
Current interpretations of the available historic evidence, however, suggest that these stories are pure legend. Hippasus’ discovery—assuming he even made it—was likely to have been hailed as a
mathematical achievement that made the Pythagoreans proud. In fact, many questionable stories swirl around the Pythagoreans who were persecuted for their philosophical and political ideas.
The available facts are limited. The community was probably founded in what is now southern Italy by Pythagoras of Samos—the Greek scholar after whom the famous Pythagorean theorem is named (although
it is also unclear whether he proved the theorem). In addition to their interest in mathematics, the Pythagoreans had a number of views that set them apart from others in ancient Greece. They
rejected wealth, lived a vegetarian, ascetic lifestyle and believed in reincarnation. Eventually, the group suffered several attacks and, after Pythagoras’ death, the community disappeared.
Regarding the tale of Hippasus, the element that historians agree is most likely true is that the Pythagoreans at some point proved the incommensurability of certain quantities, from which the
existence of irrational numbers follows.
Numbers beyond Fractions
We now learn in school that some values—the so-called irrational numbers—cannot be expressed as the ratio of two integers. But this realization is far from obvious. After all, irrational values can
at least be approximated by fractions—although that is sometimes difficult.
The famed proof of irrational numbers presented by Hippasus—or another Pythagorean—is most easily illustrated with an isosceles right triangle: consider a triangle with two sides, each of length a,
that form a right angle opposite a hypotenuse of length c.
The existence of irrational numbers is best explained with an isosceles right triangle—that is, a triangle with two sides of an equal length that form a right angle.
Manon Bischoff/Spektrum der Wissenschaft
Such a triangle has a fixed aspect ratio a/c. If both a and c are rational numbers, the lengths of the sides of the triangle can be chosen so that a and c each correspond to the smallest possible
natural number (that is, they have no common divisor). For example, if the aspect ratio were 2/3, you would choose a = 2 and c = 3. Assuming that the lengths of the triangle correspond to
rational numbers, a and c are integers and have no common divisor—or so everyone thought.
Proof by Contradiction
Hippasus used this line of thinking to create a contradiction, which in turn proved that the original assumption must be wrong. First, he used the Pythagorean theorem (good old a^2 + b^2 = c^2) to
express the length of the hypotenuse c as a function of the two equal sides a. Or, to put that mathematically: 2a^2 = c^2. Because a and c are integers, it follows from the previous equation that c^2
must be an even number. Accordingly, c is also divisible by 2: c = 2n, where n is a natural number.
Substituting c = 2n into the original equation gives: 2a^2 = (2n)^2 = 4n^2. A factor of 2 can be canceled on both sides, giving the following result: a^2 = 2n^2. Because n is an integer, it follows that
a^2 is even, and therefore a is also an even number. This conclusion contradicts the original assumption, however, because if a and c are both even, they share the common divisor 2.
This contradiction allowed Hippasus to conclude that the aspect ratio of an isosceles right triangle a/c cannot correspond to a rational number. In other words, there are numbers that cannot be
represented as the ratio of two integer values. For example, if the right-angle-forming sides have length a = 1, then the hypotenuse c = √2. And as we know today, √2 is an irrational number with decimal places
that continue indefinitely without ever repeating.
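The contradiction argument can be illustrated (though of course not proved) by a brute-force search in Python: no pair of integers a, c with 2a² = c² ever turns up.

```python
from math import isqrt

# If sqrt(2) were rational, some integers a, c would satisfy 2*a**2 == c**2.
# The proof above shows none exist; a finite search illustrates this.
hits = [(a, c)
        for a in range(1, 1000)
        for c in (isqrt(2 * a * a),)   # candidate integer square root
        if c * c == 2 * a * a]
print(hits)  # [] : no integer ratio c/a equals sqrt(2)
```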
From our current perspective, the existence of irrational values does not seem too surprising because we are confronted with this fact at a young age. But we can only imagine what this realization
might have prompted some 2,500 years ago. It could have turned the mathematical worldview upside down. So it’s no wonder that there are so many myths and legends about its discovery.
This article originally appeared in Spektrum der Wissenschaft and was reproduced with permission.
|
{"url":"https://www.dishusa.net/2024/06/how-secret-society-discovered.html","timestamp":"2024-11-11T23:23:27Z","content_type":"application/xhtml+xml","content_length":"112776","record_id":"<urn:uuid:5a5c8a1e-c0c3-416e-a745-d96bdcd4b4db>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00720.warc.gz"}
|
EUPollMap: the European atlas of contemporary pollen distribution maps derived from an integrated Kriging interpolation approach
Articles | Volume 16, issue 1
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
Interactive discussion
Status: closed
Publish as is (12 Dec 2023) by Birgit Heim
by Fabio Oriani on behalf of the Authors (14 Dec 2023)
|
{"url":"https://essd.copernicus.org/articles/16/731/2024/essd-16-731-2024-discussion.html","timestamp":"2024-11-08T21:59:55Z","content_type":"text/html","content_length":"188818","record_id":"<urn:uuid:32cb76ec-99cb-44d8-8706-ed1ce3d8ab62>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00138.warc.gz"}
|
Notion of Mathematics
The Notion of Limits in Mathematics
The basic principle behind the notion of limits in mathematics is that there are many objects and principles that can be used to describe infinity. You are not confined to a fixed body of knowledge; you can
discover and apply that limitless potential. And when you do, you have infinite possibilities for yourself and the whole world.
Every life has limits. They are grounded in space and time and are characterized by both the laws and the limitations of nature. It would be easy to say that the ultimate limit of human society, and of
ourselves, is death, but we should recognize that we are free, and that when we die we pass beyond the limits we set for ourselves. In other words, we live with the concepts of limits in
mathematics.
For some people, the notion of limits in mathematics can seem foreign. Yet when you look at certain things, such as atoms, they seem to expand and contract in line with the laws of mathematics.
The idea of limits in mathematics works in much the same way. The laws of science have this same nature and give us the tools to work out and use our limits.
Limits can be measured and are available to us, and so they help people understand those constraints and how to work around them. Though mathematical concepts can be complex, many individuals find
limits simple to grasp and apply to their lives. And limits certainly are something that everyone can use as a learning tool.
Constraints are nothing to panic about. The fundamental idea, as we saw, is the principle that we are tied to mathematics and nothing more. Within the notion of limits in math, the
concepts of being and infinity exist; we simply opt to limit ourselves to particular ideas of them.
Many people would like to find out more. One way of doing so is to try to identify a principle or law that explains the notion of limits in math. What
happens is that the mind keeps looking for something it perceives as unlimited. In other words, the mind goes back to the thought of space, which compels it
to figure out the bounds of space. Quite simply, the mind discovers a means to justify the existence of boundaries.
We must see that the notion of constraints in math is not really any different from the notion of limits in mathematics. To help understand this, we can look at the way we use the notion of constraints in
mathematics.
To begin with, we have to realize that the world is filled with infinitesimal particles. These particles have various levels of movement, and so they cannot all exist in the
same place at the same moment. That means that if we measure the rate of an individual particle, we will find that particles have various
levels of motion.
Naturally, there is no good reason for one part of the universe to have exactly the same speed as an individual particle, because the particles we see all belong to different
parts of the universe, and it does not make sense to try to use one rate as a yardstick. When we say that the speed of a particle is above or below some reference, we are using a standard to detect
which portion of the universe is shifting; we are measuring the rate of an entity. This is a very basic way of
thinking, although it may be confusing to many people. It is founded on a basic understanding of physics.
Additionally, once we have different rates of particles, there is an equivalent in the universe of "mass." We are aware that there is no mass in the vacuum of space. Particles have
no mass since they have different speeds of movement, as we discussed. But they are all made up of precisely the same substance: energy. And it is through this that we are able to use the notion of limits from
|
{"url":"https://metasail.info/index.php/2020/05/28/notion-of-mathematics/","timestamp":"2024-11-12T10:27:26Z","content_type":"text/html","content_length":"37466","record_id":"<urn:uuid:a4079151-1a24-4ac2-81d3-666e249fcd81>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00597.warc.gz"}
|
Calculating sub-atomic particles from atomic number and mass number
We can calculate the number of sub-atomic particles (i.e. electrons, protons, neutrons) if the atomic mass or atomic number is provided for an element. Similarly, the atomic number and mass number
can be calculated for any element if the number of subatomic particles is known.
From the definitions of Atomic number and Atomic mass, we know:
Atomic number = the number of protons
Mass number = the number of protons + the number of neutrons
From these, we can deduct:
Number of neutrons = Mass number – Atomic number
An atom has no overall charge, which means that there are equal numbers of negatively charged electrons and positively charged protons. If we know the number of protons (or the atomic number) of an atom, this
will be equal to the number of electrons of that atom.
Number of electrons = Number of protons
Example Question:
Calculate the sub-atomic particles for sodium-23 (²³Na).
From the provided data, we know:
Mass number: 23
Atomic number: 11
As we know:
Number of protons = Atomic number
Number of protons = 11
Since, Number of electrons = Number of protons
Therefore, Number of electrons = 11
As we know:
Mass number = number of protons + neutrons
and, Atomic number = number of protons
Number of neutrons = Mass number – Atomic number
= 23 – 11
Number of neutrons = 12
|
{"url":"http://passmyexams.co.uk/GCSE/chemistry/calculating-sub-atomic-particles.html","timestamp":"2024-11-09T01:17:18Z","content_type":"application/xhtml+xml","content_length":"13629","record_id":"<urn:uuid:51c01f39-1e62-4b91-9835-9b3f07f587f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00458.warc.gz"}
|
What is a Hydraulic Jump? in context of height of hydraulic jump formula
30 Aug 2024
Title: The Hydraulic Jump: A Phenomenon of Fluid Dynamics
The hydraulic jump is a fascinating phenomenon that occurs when a fluid, such as water or air, flows over an obstacle or into a new container with a significantly different depth. This article
provides an in-depth explanation of the hydraulic jump, its characteristics, and the underlying physics. We will also explore the height of the hydraulic jump formula, which is crucial for
understanding this phenomenon.
The hydraulic jump is a type of fluid flow that occurs when a fluid flows over an obstacle or into a new container with a significantly different depth. This phenomenon was first described by French
engineer Claude-Louis Navier in 1827 and has since been extensively studied in various fields, including civil engineering, environmental science, and physics.
The hydraulic jump is characterized by the sudden change in fluid flow from a smooth, laminar flow to a turbulent, chaotic flow. This transition occurs when the fluid flows over an obstacle or into a
new container with a significantly different depth. The resulting flow is often referred to as a “jump” because of its sudden and dramatic change.
The hydraulic jump is governed by the principles of fluid dynamics, specifically the Navier-Stokes equations. These equations describe the motion of fluids and are used to predict the behavior of
fluids under various conditions.
One of the key factors that determines the height of a hydraulic jump is the Froude number (Fr), which is defined as:
Fr = U / √(g \* h)
where U is the velocity of the fluid, g is the acceleration due to gravity, and h is the depth of the fluid.
The Froude number is a dimensionless quantity that characterizes the flow regime. For a hydraulic jump to occur, the upstream Froude number must be greater than 1, indicating supercritical flow.
Height of Hydraulic Jump Formula:
The height of a hydraulic jump (H) can be estimated using the following formula:
H = (0.5 \* U^2) / g
where U is the velocity of the fluid and g is the acceleration due to gravity.
This formula is often referred to as the "height of hydraulic jump" or "hydraulic jump height" formula; strictly, it is the velocity head of the flow and gives only a first-order estimate. The classical
result for the downstream (sequent) depth is Bélanger's equation, h2/h1 = 0.5 \* (√(1 + 8 \* Fr1^2) - 1), where Fr1 is the upstream Froude number. Such relations are essential for designing and
optimizing various engineering systems, such as dams, canals, and pipelines.
In conclusion, the hydraulic jump is a fascinating phenomenon that occurs when a fluid flows over an obstacle or into a new container with a significantly different depth. The underlying physics are
governed by the Navier-Stokes equations, and the height of the hydraulic jump can be estimated using the formula H = (0.5 \* U^2) / g. This formula is a valuable tool for engineers and scientists
working in various fields, including civil engineering, environmental science, and physics.
1. Navier, C.-L. (1827). Mémoire sur les lois du mouvement des fluides. Journal de l’École Polytechnique, 10, 338-355.
2. Rouse, H. (1936). The hydraulic jump. Transactions of the American Society of Civil Engineers, 101, 1-24.
ASCII Format:
Here is the formula in ASCII format:
H = (0.5 * U^2) / g
Note: The * symbol represents multiplication, and the ^ symbol represents exponentiation.
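As a sketch in Python (SI units and g = 9.81 m/s² assumed; function names are illustrative), the following computes the Froude number in its standard dimensionless form Fr = U/√(gh), the velocity-head estimate quoted above, and the classical Bélanger sequent-depth relation, which is the usual engineering result for jump height:

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def froude(U, h):
    """Froude number Fr = U / sqrt(g*h)."""
    return U / sqrt(G * h)

def velocity_head(U):
    """H = 0.5 * U**2 / g, the simple estimate quoted above."""
    return 0.5 * U**2 / G

def sequent_depth(h1, U1):
    """Downstream depth h2 from Belanger's relation."""
    Fr1 = froude(U1, h1)
    return h1 * 0.5 * (sqrt(1 + 8 * Fr1**2) - 1)

# Supercritical inflow: 0.5 m deep, moving at 5 m/s
print(round(froude(5, 0.5), 2))         # Fr > 1: a jump can form
print(round(sequent_depth(0.5, 5), 2))  # downstream depth, metres
```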
Related articles for ‘height of hydraulic jump formula’ :
Calculators for ‘height of hydraulic jump formula’
|
{"url":"https://blog.truegeometry.com/tutorials/education/9fcd253d03e8c8c3c0b3bf9faa0bba38/JSON_TO_ARTCL_What_is_a_Hydraulic_Jump_in_context_of_height_of_hydraulic_jump_f.html","timestamp":"2024-11-06T08:28:43Z","content_type":"text/html","content_length":"18771","record_id":"<urn:uuid:78d65473-af17-4db6-b011-e29d8b23b8c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00513.warc.gz"}
|
Hackerrank Drawing Book (C++) - Code Probs
C++ :
int pageCount(int n, int p) {
    /*
     * Write your code here.
     */
    int noOfPagesToTurn = 0, temp = 0;
    // starting from the beginning
    if (p == 1 || p == n || (p % 2 == 0 && n - p == 1)) {
        noOfPagesToTurn = 0;
    } else {
        int startingPage = 2;
        noOfPagesToTurn = 0;
        // count flips from the front until page p is reached
        for (; startingPage < n; startingPage += 2) {
            noOfPagesToTurn++;
            if (startingPage == p || (startingPage + 1) == p) {
                break;
            }
        }
        // check if book's max page is odd or even
        if (n % 2 == 0) {
            startingPage = n - 2;
        } else {
            startingPage = n - 3;
        }
        temp = 0;
        // count flips from the back until page p is reached
        for (; startingPage > 1; startingPage -= 2) {
            temp++;
            if (startingPage == p || (startingPage + 1) == p) {
                break;
            }
        }
        if (temp < noOfPagesToTurn) {
            noOfPagesToTurn = temp;
        }
    }
    return noOfPagesToTurn;
}
Set both the noOfPagesToTurn and temp variables to 0. The first if checks whether the page to turn to is 1, the last page n, or, if the book has an odd number of pages, the page n – 1 (if a book has,
say, 7 pages, and the page to turn to is 6, page 6 is right beside page 7, so no need to turn further). If page p meets any of the conditions above, no additional flips are required, so 0 is returned.
Otherwise, starting from the next page after the beginning of the book, set the starting page to 2. For each +2 pages, check if the page p has been flipped to, by checking the number with starting
page or the page adjacent to it. The total number of flips needed from the beginning of the book is recorded in noOfPagesToTurn. Basically, this checks the page number on the left side of the book.
Next, starting from the back of the book, check if the max page number n is odd or even. If it’s odd, start 3 pages before the last page, otherwise, start 2 pages. Similarly, this checks the page
number on the left side of the book, only this time starting from the back. Each flip is recorded in temp.
At the end, check whether flipping from the front or the back of the book requires more flips. Assign the lower of the two counts to noOfPagesToTurn and return it.
While checking test cases for the problem, another possible solution came up. This solution is to split the total number of pages in 2 (find the middle of the book) and check whether the page to flip
to is nearer to the front or the back of the book. Flip from whichever end is closer. Perhaps I'll implement this in another language when we do another run through of algorithm practice.
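A sketch of that simpler approach in Python (hypothetical; not part of the original C++ solution): flipping from the front takes p // 2 turns, flipping from the back takes n // 2 - p // 2 turns, and the answer is the minimum of the two.

```python
def page_count(n: int, p: int) -> int:
    """Minimum page turns to reach page p in an n-page book."""
    from_front = p // 2            # each turn shows two pages
    from_back = n // 2 - p // 2    # same idea, counted from the back
    return min(from_front, from_back)

print(page_count(6, 2))  # 1
print(page_count(5, 4))  # 0 (page 4 faces page 5, the last page)
```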
|
{"url":"https://codeprobs.com/hackerrank-drawing-book-c/","timestamp":"2024-11-02T09:41:18Z","content_type":"text/html","content_length":"62227","record_id":"<urn:uuid:410bb25b-7ab3-43d4-b9d4-9f5cc9288511>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00197.warc.gz"}
|
Capacitance Calculator
The capacitance calculator will calculate capacitance of any kind of capacitor. Check how changing the distance between plates increases or decreases capacitance accordingly. Get results in other
related units as well.
What Is Capacitance?
“It is the ability of a capacitor to store charge”
The capacitance of a capacitor depends on three factors:
• Dielectric medium
• Plate area
• Distance between the capacitor plates
Parallel Plate Capacitor Formula:
Our parallel plate capacitor calculator uses the standard equation to calculate capacitor capacitance. However, if you prefer manual calculation, follow the formula:
Capacitance = ε × Area / Distance, or C = ε A / s
ε = 8.854 pF / m
The above permittivity value (the permittivity of free space) is the default used by this capacitor capacitance calculator when no specific permittivity is entered.
How To Find Capacitance?
Basically, capacitance is the ratio of the charge in a capacitor to the voltage across its plates. Let us figure it out through an example!
If the area of the capacitor plates is 125 mm² and the separation between the plates is 7 mm (with vacuum between them), what is the capacitance? (The permittivity of free space is about 8.854 × 10⁻¹² F/m.)
Using the parallel plate capacitance formula:
C = ε A / s
C = (8.854 × 10⁻¹² F/m × 125 × 10⁻⁶ m²) / (7 × 10⁻³ m)
C ≈ 1.58 × 10⁻¹³ F ≈ 0.158 pF
Note that area and separation must first be converted to SI units (m² and m) before substituting into the formula.
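The same calculation can be sketched in code. This is a minimal helper using the standard vacuum permittivity; the relative permittivity default of 1 (vacuum) is an assumption of mine:

```python
EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

def parallel_plate_capacitance(area_m2: float, separation_m: float,
                               relative_permittivity: float = 1.0) -> float:
    """C = eps_r * eps_0 * A / s, in farads (SI units in, farads out)."""
    return relative_permittivity * EPSILON_0 * area_m2 / separation_m

# plates of 125 mm^2 separated by 7 mm in vacuum
c = parallel_plate_capacitance(125e-6, 7e-3)
print(f"{c * 1e12:.3f} pF")  # 0.158 pF
```

Keeping all inputs in SI units avoids the unit-mixing mistakes that mm²/mm inputs invite.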
Working of Capacitance Calculator:
Our calculator requires certain values to calculate capacitance. Let's explore these one by one!
• Enter the values of area, permittivity, and space between plates
• Tap Calculate
Output:
• The capacitance of the parallel plate capacitor
• Conversions in other related units of measurement
What Is The Value of K In Capacitance?
• For free space, k=1
• For all other media, k>1
To accurately calculate capacitance with any value of k, you may as well let this capacitance calculator do all the maths for you.
What Is a Normal Capacitance?
The normal capacitance value ranges typically from 1nF to 1µF.
What Causes Capacitance To Increase?
Capacitance is directly proportional to the plate area and to the dielectric constant, and inversely proportional to the plate separation. So to get a higher capacitance, use larger plates, a stronger dielectric, or a smaller gap between the plates.
What Causes Negative Capacitance?
When the change introduced in charge changes the voltage value but in the opposite direction, the capacitance will be considered negative.
Capacitance Conversion Chart:
uF / MFD nF pF (MMFD)
1uF / MFD 1000nF 1000000pF(MMFD)
0.82uF / MFD 820nF 820000pF (MMFD)
0.8uF / MFD 800nF 800000pF (MMFD)
0.7uF / MFD 700nF 700000pF (MMFD)
0.68uF / MFD 680nF 680000pF (MMFD)
0.6uF / MFD 600nF 600000pF (MMFD)
0.56uF / MFD 560nF 560000pF (MMFD)
0.5uF / MFD 500nF 500000pF (MMFD)
0.47uF / MFD 470nF 470000pF (MMFD)
0.4uF / MFD 400nF 400000pF (MMFD)
0.39uF / MFD 390nF 390000pF (MMFD)
0.33uF / MFD 330nF 330000pF (MMFD)
0.3uF / MFD 300nF 300000pF (MMFD)
0.27uF / MFD 270nF 270000pF (MMFD)
0.25uF / MFD 250nF 250000pF (MMFD)
0.22uF / MFD 220nF 220000pF (MMFD)
0.2uF / MFD 200nF 200000pF (MMFD)
0.18uF / MFD 180nF 180000pF (MMFD)
0.15uF / MFD 150nF 150000pF (MMFD)
0.12uF / MFD 120nF 120000pF (MMFD)
0.1uF / MFD 100nF 100000pF (MMFD)
0.082uF / MFD 82nF 82000pF (MMFD)
0.08uF / MFD 80nF 80000pF (MMFD)
0.07uF / MFD 70nF 70000pF (MMFD)
0.068uF / MFD 68nF 68000pF (MMFD)
0.06uF / MFD 60nF 60000pF (MMFD)
0.056uF / MFD 56nF 56000pF (MMFD)
0.05uF / MFD 50nF 50000pF (MMFD)
0.047uF / MFD 47nF 47000pF (MMFD)
0.04uF / MFD 40nF 40000pF (MMFD)
0.039uF / MFD 39nF 39000pF (MMFD)
0.033uF / MFD 33nF 33000pF (MMFD)
0.03uF / MFD 30nF 30000pF (MMFD)
0.027uF / MFD 27nF 27000pF (MMFD)
0.025uF / MFD 25nF 25000pF (MMFD)
0.022uF / MFD 22nF 22000pF (MMFD)
0.02uF / MFD 20nF 20000pF (MMFD)
0.018uF / MFD 18nF 18000pF (MMFD)
0.015uF / MFD 15nF 15000pF (MMFD)
0.012uF / MFD 12nF 12000pF (MMFD)
0.01uF / MFD 10nF 10000pF (MMFD)
0.0082uF / MFD 8.2nF 8200pF (MMFD)
0.008uF / MFD 8nF 8000pF (MMFD)
0.007uF / MFD 7nF 7000pF (MMFD)
0.0068uF / MFD 6.8nF 6800pF (MMFD)
0.006uF / MFD 6nF 6000pF (MMFD)
0.0056uF / MFD 5.6nF 5600pF (MMFD)
0.005uF / MFD 5nF 5000pF (MMFD)
0.0047uF / MFD 4.7nF 4700pF (MMFD)
0.004uF / MFD 4nF 4000pF (MMFD)
0.0039uF / MFD 3.9nF 3900pF (MMFD)
0.0033uF / MFD 3.3nF 3300pF (MMFD)
0.003uF / MFD 3nF 3000pF (MMFD)
0.0027uF / MFD 2.7nF 2700pF (MMFD)
0.0025uF / MFD 2.5nF 2500pF (MMFD)
0.0022uF / MFD 2.2nF 2200pF (MMFD)
0.002uF / MFD 2nF 2000pF (MMFD)
0.0018uF / MFD 1.8nF 1800pF (MMFD)
0.0015uF / MFD 1.5nF 1500pF (MMFD)
0.0012uF / MFD 1.2nF 1200pF (MMFD)
0.001uF / MFD 1nF 1000pF (MMFD)
0.00082uF / MFD 0.82nF 820pF (MMFD)
0.0008uF / MFD 0.8nF 800pF (MMFD)
0.0007uF / MFD 0.7nF 700pF (MMFD)
0.00068uF / MFD 0.68nF 680pF (MMFD)
0.0006uF / MFD 0.6nF 600pF (MMFD)
0.00056uF / MFD 0.56nF 560pF (MMFD)
0.0005uF / MFD 0.5nF 500pF (MMFD)
0.00047uF / MFD 0.47nF 470pF (MMFD)
0.0004uF / MFD 0.4nF 400pF (MMFD)
0.00039uF / MFD 0.39nF 390pF (MMFD)
0.00033uF / MFD 0.33nF 330pF (MMFD)
0.0003uF / MFD 0.3nF 300pF (MMFD)
0.00027uF / MFD 0.27nF 270pF (MMFD)
0.00025uF / MFD 0.25nF 250pF (MMFD)
0.00022uF / MFD 0.22nF 220pF (MMFD)
0.0002uF / MFD 0.2nF 200pF (MMFD)
0.00018uF / MFD 0.18nF 180pF (MMFD)
0.00015uF / MFD 0.15nF 150pF (MMFD)
0.00012uF / MFD 0.12nF 120pF (MMFD)
0.0001uF / MFD 0.1nF 100pF (MMFD)
0.000082uF / MFD 0.082nF 82pF (MMFD)
0.00008uF / MFD 0.08nF 80pF (MMFD)
0.00007uF / MFD 0.07nF 70pF (MMFD)
0.000068uF / MFD 0.068nF 68pF (MMFD)
0.00006uF / MFD 0.06nF 60pF (MMFD)
0.000056uF / MFD 0.056nF 56pF (MMFD)
0.00005uF / MFD 0.05nF 50pF (MMFD)
0.000047uF / MFD 0.047nF 47pF (MMFD)
0.00004uF / MFD 0.04nF 40pF (MMFD)
0.000039uF / MFD 0.039nF 39pF (MMFD)
0.000033uF / MFD 0.033nF 33pF (MMFD)
0.00003uF / MFD 0.03nF 30pF (MMFD)
0.000027uF / MFD 0.027nF 27pF (MMFD)
0.000025uF / MFD 0.025nF 25pF (MMFD)
0.000022uF / MFD 0.022nF 22pF (MMFD)
0.00002uF / MFD 0.02nF 20pF (MMFD)
0.000018uF / MFD 0.018nF 18pF (MMFD)
0.000015uF / MFD 0.015nF 15pF (MMFD)
0.000012uF / MFD 0.012nF 12pF (MMFD)
0.00001uF / MFD 0.01nF 10pF (MMFD)
0.0000082uF / MFD 0.0082nF 8.2pF (MMFD)
0.000008uF / MFD 0.008nF 8pF (MMFD)
0.000007uF / MFD 0.007nF 7pF (MMFD)
0.0000068uF / MFD 0.0068nF 6.8pF (MMFD)
0.000006uF / MFD 0.006nF 6pF (MMFD)
0.0000056uF / MFD 0.0056nF 5.6pF (MMFD)
0.000005uF / MFD 0.005nF 5pF (MMFD)
0.0000047uF / MFD 0.0047nF 4.7pF (MMFD)
0.000004uF / MFD 0.004nF 4pF (MMFD)
0.0000039uF / MFD 0.0039nF 3.9pF (MMFD)
0.0000033uF / MFD 0.0033nF 3.3pF (MMFD)
0.000003uF / MFD 0.003nF 3pF (MMFD)
0.0000027uF / MFD 0.0027nF 2.7pF (MMFD)
0.0000025uF / MFD 0.0025nF 2.5pF (MMFD)
0.0000022uF / MFD 0.0022nF 2.2pF (MMFD)
0.000002uF / MFD 0.002nF 2pF (MMFD)
0.0000018uF / MFD 0.0018nF 1.8pF (MMFD)
0.0000015uF / MFD 0.0015nF 1.5pF (MMFD)
0.0000012uF / MFD 0.0012nF 1.2pF (MMFD)
0.000001uF / MFD 0.001nF 1pF (MMFD)
From the source Wikipedia: Capacitance, Self-capacitance, Mutual capacitance, Capacitance matrix, Stray capacitance, Capacitance of conductors with simple shapes, Energy storage From the source Khan
Academy: Capacitors and capacitance
Notes on Nonlinear Dispersive Equations Workshop in Istanbul | James Colliander
Notes on Nonlinear Dispersive Equations Workshop in Istanbul
During last week’s NDE meeting in Istanbul, I experimented and took real-time notes in MultiMarkDown during the talks. If any of the speakers wish, I can post links to their presentations provided
they send me a copy of their slides. –J. Colliander
Almost sure well-posedness of the cubic nonlinear Schrödinger equation below $L^2({\mathbb{T}})$
I spoke about recent work with Hiro Oh. Here is a link to my slides.
Some conversations after my talk about Bourgain’s high/low frequency truncation method and refined bilinear estimate are amplified in the slides of my Warwick talk.
A review of some results on a class of nonlocal nonlinear wave-type Cauchy problems
A lot of the work here is inspired by the thesis of Nilay Duruk.
• Nonlocal Elasticity
• Examples
• Cauchy Problem
• Ongoing Studies
Nonlocal nonlinear equation
Take $\beta=\delta,$ the Dirac measure. The equation becomes a more standard nonlinear equation.
Different choices of $\beta$ lead to different equations.
Cauchy Problem
The results identify conditions, usually involving smoothing assumptions on $\beta$, under which they obtain local well-posedness results.
Global results are also obtained provided there is appropriate control in $L^\infty$.
How to get the $L^\infty$ control? If the integral of the nonlinearity is bounded from below by $-k u^2$ then we have a global solution.
Ongoing Studies
Obvious generalizations….2d case and efforts to generalize to equations which are
in elasticity. We have also considered the
problem. This is a nonlocal generalization of the classical problems in elasticity which allows for tears and cracks.
Scattering? Small amplitude initial data? Travelling Waves?
OK, so I discussed this further with H. Erbay. There are some issues with the linear problem. Let
$\Delta*\beta$ be the Fourier multiplier operator given by $-\xi^2 \beta(\xi)$. This collapses to the Laplacian when $\beta=1$. For the usual wave operator we have inhomogeneous smoothing of order 1
by the usual denominator games. However, for the wave operator corresponding to $\Delta*\beta$ we have smoothing by division by $\xi \beta(\xi)$. If $\beta(\xi)$ decays like $\xi^{-r}$ with $r>2$ we
lose the smoothing property and have new troubles.
In the discussion after the talk, E. Titi asked what they would do on a bounded domain. In this case, the convolution operator used to express the dynamics on the spatial side does not make sense
near the boundary. Upon thinking about this a bit, it seems to me that a natural thing to do is to express the data in the basis of eigenfunctions of the Laplacian on the domain and then recast the
dynamics as a multiplier operator in that basis. The issues of the domain are addressed then by the eigenfunctions and the nonlocal aspects near the boundary are resolved.
The Cauchy problem for a class of two-dimensional nonlocal nonlinear wave equations governing anti-plane shear motions in elastic materials
(joint work with H. Erbay and A. Erkip)
• Two dimensional nonlocal equations
• LWP
• Conservation of Energy and Global Existence
• Blowup
Elasticity Motivates Study of Nonlocal Wave Equations
Deformation fields in an elastic body might be influenced by distant points. Therefore, we encounter nonlocal elasticity.
$$ w_{tt} = (\beta * F_{w_x})_x + (\beta * F_{w_y})_y $$
$$0 \leq \hat{\beta}(\xi) \leq (1 + |\xi|^2)^{-r/2}$$
Taylor expansion in $\beta$ leads to higher order powers in $\xi$, which produces higher order derivative correction terms.
Cauchy Problem
Convert IVP into a Banach space valued ODE. Sobolev embedding. Algebra property of $H^s \cap L^\infty$. Convenient assumptions allow them to control the nonlinearity. (All this is done pointwise in
time and the regularity is quite high.)
(My impression is that ideas from (Kenig-Ponce-Vega, Indiana Math Journal, 1991 vol. 40 (1) pp. 33-69) could be used to prove Strichartz-type estimates adapted to this family of problems, under more
precise assumptions on the decay of $\hat{\beta}$.)
Blowup Alternative expressed in terms of $| Dw |_{L^\infty}$.
Conservation of Energy and Global Existence
The energy involves a Fourier multiplier replacing the usual appearance of $\nabla$ in the kinetic energy. Under certain lower bound conditions on the potential energy, they can prove that a certain
norm is bounded for all time which in turn controls the blowup alternative norm $\|Dw\|_{\infty}$.
Nilay Duruk (Sabanci University)
Blow-up and global existence for a general class of nonlocal nonlinear coupled wave equations
(joint work with H. Erbay and A. Erkip, this is part of her thesis)
• Nonlocal Cauchy problem
• Local
• Global
• Blowup
Nonlocal Cauchy Problem
Nonlocal coupled system of two fields, each of the flavor
with assumptions that $$0 \leq \hat{\beta}(\xi) \leq C(1+\xi^2)^{-r/2}.$$
Examples include certain coupled improved Boussinesq equations. Such systems have been studied by Godefroy 1998 and Wang, Li (2009). The system models interaction of transverse waves in an elastic medium.
GWP and blowup for a coupled system with $\beta = e^{-|x|}$ is open.
Local Lipschitz continuity of the right hand side of the system using pointwise in time estimates coming from Sobolev control of $L^\infty$.
Define an operator $P$ which plays the role of $\nabla$ in the energy depending upon $\beta$.
The nonlinearity for the system is assumed to arise from a Lagrangian/Hamiltonian formulation. Thus, we have a conserved energy. We then prove some Sobolev style bounds adapted to $Pu$ generalizing
the case of $\nabla u$. With lower bounds on the potential energy, she obtains globalizing control.
Gronwall is used to show that the energy density stays bounded….
Adaptation of the virial identity (following Godefroy 1998) shows that negative energy solutions explode.
(This strikes me as something that could be explored further from the generalized virial identity point of view.)
I suggested that they look at the
KPV paper
and to try to imitate Morawetz-type calculations using the generalized virial identity.
G. M. Muslu (Istanbul Technical University)
The Cauchy problem for the one-dimensional nonlinear peridynamic model
(joint work with H. Erbay, A. Erkip, G. Muslu)
• Motivation
• Peridynamic Model
• LWP
• GWP
• Blowup
Need for a new theory of solid mechanics
For example, across a crack the displacement field is discontinuous. We need a theory which replaces PDEs with integral equations. The peridynamic model was first proposed by Silling in 2000.
Peridynamic Model
Classical elasticity
$$\rho_0 u_{tt} = (f(u_x))_x$$
Peridynamic theory
$$\rho_0 u_{tt} = \int f(u(y,t) - u(x,t),\, y-x)\, dy$$
Newton’s third law demands that $f(\eta,\xi) = -f(-\eta,\xi)$. There are many studies on the modelling but there is relatively little mathematical analysis. Our aim is to study the nonlinear problem.
For convenience, we study $f(\eta,\xi) = \beta(\xi)\, g(\eta)$ where $\beta$ is even and $g$ is odd with $g(0)=0$.
Analysis in an appropriate space (pointwise-in-time tricks) leads to a LWP result. They treat the continuous and bounded case and the $C^1$ and bounded case. They also treat the $H^s \cap L^\infty$
case for all polynomial nonlinearities.
Energy Identity
$$E = \|u_t\|^2 + \int\!\!\int \beta(y-x)\, w(u(y,t)-u(x,t))\, dy\, dx$$
Nice symmetrization tricks based on even/odd leading to energy identity.
Concavity result. Negative energy solutions blowup.
Gonca Aki (Weierstrass Institute, Berlin)
Thermal effects in gravitational Hartree systems
(part of Ph.D thesis, joint w. Jean Dolbeaut and Christof Sparber)
Interested in self-gravitating quantum particles. We represent this by a density matrix operator. We are given a total mass of the system, which is the integration of the occupation numbers over all
occupation sites.
Energy is the kinetic energy of each state plus an interaction term described using the occupation number operators.
Variational problem corresponding to $H^1$ expressed in terms of the density matrices $\rho$.
The free energy is lower bounded by the kinetic energy using the Hardy-Littlewood-Sobolev inequality.
ack… the talk is dense and fast for me to keep up with this way, maybe even without trying to type, but I like this stuff!
Defines a notion of maximal temperature, which could perhaps be infinity. Minimizers satisfy an EL equation so we know more about them.
Compensated compactness leads to a proof of existence of minimizers. Orbital stability follows. Positivity of the critical temperature holds for all $M>0$. This extends a theorem of Lieb, who showed the minimizer
was a pure state when $T=0$, to the setting of $T \in [0, T_c]$. They have also found an explicit expression for the value of $T_c$.
Remarks for finite maximal temperature: For $\beta(s)=s^p$ with $p \in (1,7/5)$, the maximal temperature is finite.
Louis Jeanjean (Université de Franche-Comté, Besancon, France)
Stability and instability results for standing waves of quasi-linear Schrödinger equations
(joint with M. Colin and M. Squassina)
Nonlinearity 23 (2010), 1353-1385
Many other issues can be studied. Lots of open problems.
$$i\phi_t + \Delta \phi + \phi\, \Delta |\phi|^2 + f(|\phi|^2)\phi = 0$$
Cauchy Problem
We will first address the Cauchy problem. Next, we will study the traveling waves and their stability.
Poppenberg, JDE 172 (2001) proved LWP in $H^\infty$.
New energy term: $\int |\phi|^2\, |\nabla|\phi||^2\, dx$
Cauchy problem is based on work of M. Colin, CPDE 27 (2002), 325-354. This is based on energy methods to overcome the loss of derivatives induced by the quasi-linear term, gauge transforms, ….
OPEN: Solve the local Cauchy problem under more general conditions on the nonlinearity and on the data. Look for global existence results.
Standing Waves
Ansatz: $$\phi_\omega = u_\omega(x)\, e^{-i\omega t}.$$
A calculation shows that
$$-\Delta u - u\, \Delta(|u|^2) + \omega u = |u|^{p-1} u.$$
$$m_\omega = \inf\{\, E_\omega(u) : u \text{ is a nontrivial weak solution of the elliptic problem} \,\}.$$
The results identified a new critical threshold. We have $1+ \frac{4}{N}$ as usual but this problem also involves $3 + \frac{4}{N}$.
Conservation equations for long wave models and applications to undular bores
This is basically a modeling problem. There will not be a single proof in this. (joint work with Al Fati Ali and Magnar Bjorkavag)
Surface Waves
Assume the fluid is incompressible, inviscid, two dimensional, irrotational, assumption that the wave does not overturn.
Euler equations, some boundary conditions at surface and at bottom. The LWP problem has been solved in just the last 10 years or so. Numerically, this is a difficult problem. The problem is often
simplified by putting long wavelength or small amplitude assumptions.
Long wavelength gives the shallow water wave equations. Shallow water waves equation looks like a coupled system of Burger’s equations. There are an infinite number of conserved quantities in the
shallow water wave equations.
Small amplitude case is known as the Airy theory. Rewrite Euler in terms of the velocity potential but you still have the boundary conditions. The pressure is removed. Linearize the Bernoulli
equation on the boundary and calculate the dispersion formula by putting in plane waves. It emerges that $\omega^2=g k \tanh(h_0 k)$.
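A quick way to see the comparison mentioned next is to plot or tabulate the three relations numerically. This sketch assumes the commonly quoted linearized dispersion relations for KdV, $\omega = c_0 k\,(1 - (h_0 k)^2/6)$, and BBM, $\omega = c_0 k/(1 + (h_0 k)^2/6)$, with $c_0 = \sqrt{g h_0}$; the function names are mine:

```python
import math

def omega_full(k, g=9.81, h0=1.0):
    """Full linear water-wave dispersion: omega^2 = g k tanh(h0 k)."""
    return math.sqrt(g * k * math.tanh(h0 * k))

def omega_kdv(k, g=9.81, h0=1.0):
    """Linearized KdV dispersion (second-order long-wave expansion)."""
    c0 = math.sqrt(g * h0)
    return c0 * k * (1 - (h0 * k) ** 2 / 6)

def omega_bbm(k, g=9.81, h0=1.0):
    """Linearized BBM dispersion (rational regularization)."""
    c0 = math.sqrt(g * h0)
    return c0 * k / (1 + (h0 * k) ** 2 / 6)

# small k: all three agree closely; large k: the KdV frequency turns negative
for k in (0.1, 3.0):
    print(k, omega_full(k), omega_kdv(k), omega_bbm(k))
```

For long waves the three curves are indistinguishable, while at moderate wavenumbers the KdV polynomial blows up (and even changes sign), whereas BBM stays bounded and closer to the full relation.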
One can compare the dispersion relation for the KdV, BBM and water wave formulae. KdV is a bad model for water waves since
2nd trimester groupwork #1 Group 7A
A solid conducting sphere of radius a is surrounded by a hollow conducting
shell of inner radius b and outer radius c as shown above. The sphere and the
shell each have a charge +Q, Express your answers to parts (a), (b) and (e) in
terms of Q, a, b, c, and the Coulomb's law constant.
a. Using Gauss's law, derive an expression for the electric field magnitude at a < r < b, where r is the
distance from the center of the solid sphere.
b. Write expressions for the electric field magnitude at r> c, b < r
< c, and r < a. Full credit will be given for statements of the
correct expressions. It is not necessary to show your work on
this part
c. On the axes below, sketch a graph of the electric field
magnitude E vs. distance r from the center of the solid sphere.
d. On the axes below, sketch a graph of potential V vs distance r from the center of the solid
sphere (The potential V is zero at r =∞)
e. Determine the Potential at r = b.
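A hedged sketch of the expected answers (not an official key): by Gauss's law the field vanishes inside each conductor, encloses $+Q$ between sphere and shell, and $+2Q$ outside the shell; since $E = 0$ for $b < r < c$, the potential at $r = b$ equals the potential at $r = c$. The constant `K` and function names are mine:

```python
K = 8.99e9  # Coulomb's law constant, N*m^2/C^2

def field_magnitude(r: float, q: float, a: float, b: float, c: float) -> float:
    """E(r) for a solid sphere (+Q, radius a) inside a conducting
    shell (+Q, inner radius b, outer radius c), via Gauss's law."""
    if r < a:
        return 0.0              # inside a conductor
    if r < b:
        return K * q / r**2     # Gaussian surface encloses +Q
    if r < c:
        return 0.0              # inside the shell's conducting material
    return K * 2 * q / r**2     # encloses +2Q

def potential_at_b(q: float, b: float, c: float) -> float:
    """V(b) = V(c) because E = 0 for b < r < c; V(c) = 2kQ/c."""
    return 2 * K * q / c
```

Sketching E(r) from these cases reproduces the jumps at r = a, b, c that parts (c) and (d) ask about.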
New Formula for Geometric Stiffness Matrix Calculation
Received 4 June 2015; accepted 24 April 2016; published 27 April 2016
1. Introduction
Stress stiffening is an important source of stiffness and must be taken into account when analyzing structures. The standard formula for geometric stiffness matrices is introduced by a number of
authors, such as Zienkiewicz, Bathe, Cook, Belytschko, Simo, Hughes, Bonet, de Souza Neto and others [1] - [10] . The standard formula has been shown to be satisfactory in a large amount of cases,
though certain difficulties such as low accuracy, poor convergence rate and poor solution stability were discovered when solving problems that included the evaluation of extreme stress and strain
states. Some authors, e.g. Cook [4] , have suggested an improvement for bars and some authors dealt with nonlinear models describing large (finite) deformation (strain) behavior of materials and
structures [11] - [21] . However, as far as the authors know, no general solution to the problem has been suggested for a 2D or 3D continuum. Upon this ascertainment, thoughts arose concerning the
physical essence of geometric (or stress) stiffness and the formula for evaluating geometric stiffness matrices. As a result, a new formula for geometric stiffness matrix calculation is suggested.
The presentation of this new formula, which should substantially improve analysis of structures exposed to large strain, is the subject matter of this paper. In Section 2, the standard formula for
geometric stiffness matrices is presented. Section 3 shows the physical background of geometric stiffness based on equilibrium. In Section 4, the new, improved formula for geometric matrices is
introduced. The advantages of the new formula, including a substantially improved rate of convergence and stability, are demonstrated by examples in Section 5. Conclusions are presented in Section 6.
2. The Standard Formula for Stress-Stiffness Matrices
Let us show the general calculation algorithm for the geometric stiffness matrix (sometimes also called the stress stiffness matrix or initial stress matrix) of an element in an updated Lagrangian formulation.
Let the following hold for each component
Let us define matrix
Let us define matrix
and matrix
The operator
Further, let us define matrix
If state of the stress is not negligible, the potential energy of the internal forces should be completed by the following term:
Then, the following formula for the geometric matrix of the element can be written:
Integration is carried out on the deformed body
The component of the matrix
or in indicial notation:
Similar formulae also hold for a total Lagrangian formulation, but the second Piola-Kirchhoff stress tensor is then used instead of the Cauchy stress, and integration is carried out on the undeformed body.
3. The Source of Geometric Stiffness―The Physical Background
Let us consider the truss member shown in Figure 1. Node 2 is loaded by the force F parallel to the x axis and sliding in the same direction. The equilibrium equation in the x direction in node 2 can
be written as follows
This formula is independent of any strain measure or pertinent constitutive relations. It can be seen that stiffness
Let us show a derivation of a formula for geometric stiffness matrix of a truss member (see Figure 2) in a finite element formulation and let us start with a simple derivation based on equilibrium
Figure 1. Truss member in an arbitrary position in 2D.
Figure 2. Truss member: the x axis is the axis of the rod in its original position.
A geometric (stress) stiffness matrix can be obtained by an equilibrium condition when only the initial stress state and pertinent infinitesimal nodal displacement for each row of the matrix is taken
into account. Such a definition of a geometric stiffness matrix is independent of the strain tensor chosen.
To simplify the following derivations let’s introduce both, the coordinates
Let the vector of the nodal displacements of the element be
Note that the truss element has no lateral material stiffness.
In general, arbitrary term of a stiffness matrix
The moment equilibrium condition can be written as follows:
For the infinitesimal angle
When introducing a displacement
From equilibrium equations and symmetry of the stiffness matrix it is easy to determine the other coefficients of the geometric stiffness matrix, particularly
The same formula corresponds with Formula (12) and is also presented by Cook in [4], as well as by many other authors. The geometric stiffness matrix for a truss member can also be derived from the
principle of virtual work, which will be described later. Then a strain measure and constitutive law must be introduced, which is not applicable for a rigid truss, where geometric stiffness also exists.
The resulting tangent stiffness matrix
[ ]
When applying the general standard algorithm for geometric stiffness matrices to the truss element in question, we obtain:
Substituting in the formulae
the formula for the geometric stiffness matrix reads:
This geometric stiffness matrix differs from that in Formula (18) and introduces also an axial stiffening. But no reason was found by the authors for concluding that normal force had led to a change
in the axial stiffness of the element. So let us derive the geometric stiffness matrix of a truss element in a more undisputable way based on the principle of virtual work.
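Formulae (18) and (27) are not reproduced in this extract, so as an illustration, here are the two commonly published local-axis forms that this discussion appears to contrast: the equilibrium-based matrix with only lateral terms, and the standard form that also couples the axial DOFs by N/L. The DOF ordering [u1, v1, u2, v2] and function names are my own:

```python
def kg_lateral(n_force: float, length: float):
    """Equilibrium-based geometric stiffness of a 2D truss element in
    local axes, DOFs [u1, v1, u2, v2]: only the lateral (v) terms."""
    f = n_force / length
    return [[0,  0, 0,  0],
            [0,  f, 0, -f],
            [0,  0, 0,  0],
            [0, -f, 0,  f]]

def kg_standard(n_force: float, length: float):
    """Commonly published 'standard' form, which also couples the
    axial (u) DOFs and thus adds an axial stiffening of N/L."""
    f = n_force / length
    return [[ f,  0, -f,  0],
            [ 0,  f,  0, -f],
            [-f,  0,  f,  0],
            [ 0, -f,  0,  f]]

# The two forms differ only in the axial rows/columns:
print(kg_standard(10.0, 2.0)[0])  # [5.0, 0, -5.0, 0]
print(kg_lateral(10.0, 2.0)[0])   # [0, 0, 0, 0]
```

The difference matrix is a pure axial-stiffening term of magnitude N/L, which is exactly the contribution the paper argues has no physical justification.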
With deformation restricted to the
For truss the principle of virtual work becomes
where ^ Piola-Kirchhoff stress in the
and the linearized equation of the principle of virtual work (virtual displacement) simplifies to:
Assuming we obtain
Then the equation of the principle of virtual work can be written as follows:
After transformation into global coordinate system
and after elimination of the vector of virtual displacements we get:
The geometric stiffness matrix (45) is the same as that obtained by use the standard Formula (27) and the first row of the matrix does not correspond with Formula (12). Let us try to derive the
geometric stiffness matrix of a truss element using a more accurate strain measure.
The approximate nature of the linear relation between the deformation and displacement can be shown on a fibre of initial length (see Figure 3).
Let us denote by the vector of displacement of the starting point of the fibre. The end-point of the fibre will be displaced by vector
Using the formula for the body-diagonal of a cuboid with dimensions
Introducing stretch
we obtain the following relation for stretch of the fibre:
Let us consider the binomial theorem:
and let us take into account only the first two terms. Then we can write:
and for
If we want to be more accurate and take into account three terms of the binomial expansion, and if we neglect the third and higher powers of the derivatives of the displacement components, we get a
more accurate expression for the stretch:
and hence
For a 1D problem, therefore, this more accurate expression would be identical to the formula for
Using the more accurate strain measure we obtain:
where a new matrix is defined.
The linearized equation of the principle of virtual work (virtual displacement) modifies to:
After transformation into global coordinate system and elimination of the vector of virtual displacements we get different geometric stiffness matrix in the rotated and thus also in global coordinate
Resulting stiffness matrix
It can be seen that the standard formula has produced a different geometric matrix for the 2D truss element (27) than Formulae (18), (12) and (61) derived earlier and theoretically unjustified
geometric axial stiffness was also produced. This formula would lead to a poor convergence rate, inaccuracy and even, in the case of extreme compression, to singularity. E.g. for the 2nd iteration it
would be 1/4, and in the i-th iteration the unbalanced force of
To obtain the same geometric stiffness matrix for the 2D truss element (18) as was derived above from the equilibrium, the influence of the member
4. An Improved Formula for a Geometric Stiffness Matrix
Introducing a fibre of constant cross section area A in the direction
To evaluate the first part of the expression, a strain measure and pertinent constitutive relation must be chosen. This part represents material stiffness. The second part, which is the matter of our
interest, represents geometric stiffness.
The contributions of the two remaining principal stresses to the stiffness could be derived in a similar manner.
Let us introduce the infinitesimal volume element of continuum
It was earlier shown that for a rod (see formula (12)) the first derivative of a displacement component with respect to the same direction does not generate geometric stiffness. For the 2D or 3D
continuum a similar formula to (60) can be derived in a similar way as in the case of a rod.
New measure of deformation
The linearized equation of the principle of virtual work (virtual displacement) for 2D or 3D continuum is similar to (32)
yielding its following form in terms of finite element matrices:
The difference from the standard formula (8) lies in the fact that in the
A particular case where the standard formula was applied to a 2D truss element producing an unintentional change in the axial stiffness was presented earlier. This phenomenon can also be generally
observed when the standard formula is used. It is clear that the uniaxial stress state will provide the same result regardless of the way it is modeled, i.e. a truss member modeled as a 3D solid
should provide the same result as one modeled by a truss member or by shell elements. To guarantee this and to improve the influence of the stress state on stiffness,
the members
stress component should influence the stiffness in the same direction. To ensure objectivity (independence from any arbitrary coordinate system) of the geometric stiffness matrix, the omission of the
above mentioned terms must be evaluated in the principal stress axes
The component of the matrix
To obtain the geometric stiffness matrix in the global coordinate system the following transformation must be performed:
The transformation from the global coordinate system
Then, the relations between the first derivatives of the base functions and stresses are the following:
Illustration on a Quadrilateral Plane Stress Element
For a quadrilateral plane stress element the following can be obtained:
where C = cos(α); S = sin(α); α is the angle between principal and global directions; t is the element thickness and A is the area of the element.
A similar formula also holds for the total Lagrangian formulation for such an element, but the second Piola- Kirchhoff stress tensor is then used instead of the Cauchy stress, integration is carried
out on the undeformed body
5. Examples
An application of the new formula for the geometric stiffness matrix for large strain was demonstrated on the example of a unit cube represented by different computational models (rod, shell, solid
elements) with different orientations in space (see Figure 4) assuming isotropic hyperelastic material with linear relation between the
Figure 4. Different computational models (rod, shell, solid elements) with different orientations in space.
Figure 5. Magnitude of displacement due to uniform normal load of the value E acting on two opposite sides.
Figure 6. Magnitude of displacement due to uniform normal load of the value -E acting on two opposite sides.
logarithmic strain and Cauchy stress tensors. Let E be the Young modulus and, for simplicity, let us assume zero Poisson ratio. The cube was exposed to uniform stress of magnitude E or -E normal to
two opposite sides, producing a logarithmic strain of 1 or -1 and the corresponding prolongation or shortening. Figure 5 and Figure 6, which are graphical outputs from the RFEM program, show that
practically exact results were obtained for all computational models and orientations.
5.1. Convergence of the Standard and New Approach―A Comparison
Let the sequence
then p is called the order of convergence of the sequence. The constant
If p is large, the sequence
5.2. The Standard Approach
Figure 7 shows that the standard approach provides only linear convergence for large strain.
5.3. The New Approach
Figure 8 shows that the new approach yields the quadratic convergence even for very large strain.
The numerical solution of the presented example has shown that to reach a sufficiently good result using the standard formula (ANSYS etc.), 15 iterations were needed, whereas using the improved approach presented in this paper (RFEM) only 5 iterations were needed to obtain the same precision.
6. Conclusions
The present formula for a geometric stiffness matrix, which has been published in many books, is widely utilized, objective and simply defined. However, stability and convergence problems occur when
analyzing large strains, or, what is more important in practice, in a case of yielding. If the yield criterion is satisfied, then the
material stiffness decreases substantially. The stress state remains high and in case of compression the tangent stiffness in the direction of the compression can become negative even with a small
This is caused by a theoretically unsupported change in pressure stiffness in the direction of compression produced by the standard formula. This results in a correspondingly high nodal force
unbalance, poor convergence and possibly also instability. The origin of the problem arises from the approximation of strain, in which only the first two terms of the binomial series are applied.
The presented algorithm is slightly more complicated, but remains objective and provides a solution with increased stability, a higher rate of convergence in the case of large strain or plastic yielding, and improved accuracy over the present formula. In the case of very large strain, the number of iterations needed could be several times lower using the new formula compared to the standard formula. In many cases the new formula can even provide solutions where the standard formula has failed. This new formula for a geometric matrix has been implemented in the RFEM program and has been demonstrated to be much more stable and faster than the standard formula.
This outcome has been achieved with the financial support of the Czech Science Foundation (GACR) project 14-25320S “Aspects of the use of complex non linear material models”.
List of Variables
Shape functions
Rotation tensor
Second Piola-Kirchhoff stress
Internal and external virtual work
Displacement field
Spatial (Eulerian) coordinates
Coordinates in principal axes
*Corresponding author.
Bogolyubov Institute for Theoretical Physics
• Laboratory of Astrophysics and Cosmology
• Laboratory of Theory of Integrable Systems
• Laboratory of Strongly-correlated Low Dimensional Systems
• Laboratory of Biophysics of Macromolecules
• Laboratory of Mathematical Modelling
• Laboratory of Structure of Atomic Nuclei
Olga V. Ugryumova
Department of Computer Maintenance
Position: Leading Engineer
Constantin V. Usenko
Department of Synergetics
Position: Senior Researcher
const.usenko@gmail.com, usenko@univ.kiev.ua
Actuarial Statistics
At the end of this course the student will have acquired the knowledge, skills and competences that will allow them:
- To be able to create distributions through core operations
- To know the main distributions and their relationships used for the insurer’s claims
- To analyze the tail of a distribution and know how to classify it
- To be able to understand the basic concepts of extreme value theory applied to actuarial sciences
- To know how to use statistical techniques for typical data concerning the number and amount of claims
General characterization
Responsible teacher
Rui Manuel Rodrigues Cardoso
Weekly - 4
Total - 70
Teaching language
Mathematical Analysis I, II. Probability and Statistics at a medium level.
- Bowers, Newton, Gerber, Hickman, Jones and Nesbitt. (1997) Actuarial Mathematics (second edition). Itasca,
Illinois: The Society of Actuaries.
- Dickson, D.C.M., Hardy, M.R. & Waters, H.R. (2020) Actuarial Mathematics for Life Contingent Risks, (third edition). Cambridge
University Press.
- Kaas, R., Goovaerts, M., Dhaene, J. & Denuit, M. (2008) Modern Actuarial Risk Theory - using R (second
edition), Springer.
- Klugman, S. A., Panjer, H. H. & Willmot, G. E. (2019) Loss Models: From Data To Decisions (fifth edition),
- Klugman, S. A., Panjer, H. H. & Willmot, G. E. (2012) Loss Models: Further Topics,
Teaching method
In the theoretical and practical lectures, the successive topics of the course program will be explained and discussed. The topics are introduced by the teacher and consolidated as much as possible with real examples drawn from the insurance industry, followed by a brief discussion and the use of computational means to support problem solving.
Evaluation method
Two tests, to be carried out during the academic period and exam (s) according to the academic calendar.
Each of the tests has a weight of 50% in the calculation of the final grade; a student with a weighted average greater than or equal to 9.5, and at least 7.5 on each of the two tests, is exempt from the exam.
Subject matter
1 Sum of independent random variables
1.1 Some results
1.2 Convolutions
2 Creating new distributions
2.1 Multiplication by a constant
2.2 Raising to a power and exponentiation
2.3 Mixing
2.4 Splicing
3 Distribution families
3.1 Parametric families
3.2 Limiting distributions
3.3 Relationships between distributions
3.4 Exponential family
4 Tails of distributions
4.1 Classification
4.2 Equilibrium distribution
4.3 Tail behavior
5 Extreme value distributions
5.1 Distribution of the maximum
5.2 Maximum domain of attraction
5.3 Generalized Pareto distribution
5.4 Limiting distributions of excesses
6 Estimation
6.1 Kaplan-Meier estimator
6.2 Nelson-Aalen estimator
6.3 Kernel density models
6.4 Estimation to complete data
6.5 Estimation to modified data
6.6 Estimation to truncated data
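As an illustrative aside on topic 1.2 (convolutions of independent random variables), the distribution of the sum of two independent discrete claim amounts can be computed directly. The sketch below is generic example code, not taken from any of the listed textbooks.

```python
def convolve(p, q):
    """Distribution of X + Y for independent X ~ p, Y ~ q.

    p[k] = P(X = k) and q[k] = P(Y = k) for k = 0, 1, 2, ...
    """
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

# two independent claims, each taking value 0/1/2 with probabilities 0.5/0.3/0.2
s = convolve([0.5, 0.3, 0.2], [0.5, 0.3, 0.2])
# s[k] = P(total claim = k); the probabilities still sum to 1
```

Repeated convolution of a single-claim distribution with itself gives the aggregate distribution of n independent claims, which is the starting point for the compound models studied later in the course.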
Programs where the course is taught:
Who provides guidance on data analysis for fluid mechanics assignments?
Who provides guidance on data analysis for fluid mechanics assignments? What is available on data/instructions? If not, is there anything to know? What about other functions? How can you apply guidelines and tasks?
Sketch of fluid/force balance
Evaluation
In order to evaluate what fluid mechanics is used for, the evaluation must be done primarily within a static model solution or in a different phase of the fluid physics investigation. This component cannot be directly automated with a force point at the position of the fluid during measurement of force. It is not possible to precisely quantify the role the fluid plays in measuring the observed force. It is also responsible for predicting the output force of the fluid applied during elevation, and the force applied during fluid separation when there is pressure required by the fluid. This is the only thing that could possibly be done in order to determine the force there (I have not stated it specifically). If needed, obtain a field test. The fluid is moving through the interior region of the chamber. This is not the geometry, but the direction of the forces associated with the process, because of the gravitational force caused by the fluid under the pressure. There are also contributions to the frictional forces when this flows through the elevator. This is what a fluid has versus how big the fluid is. In this fluid mechanism assessed: the forces associated with the time of deployment of the operating units; or the forces applied to the outlet holes and openings (e.g. the cylinder ring) of the valve housing; in a constionized model, it is also what the fluid is and what we evaluate or measure. The fluid is at the maximum force caused by the fluid. It is apparent that this occurs through the input to the fluid that has already been measured.
Who provides guidance on data analysis for fluid mechanics assignments? With a database of 100 million files on the Dikeman system, researchers have learned the difference between the real-time and simulation-based models of fluid mechanics (or fluid dynamics in general). The learning curve for the simulation-based model was very low, but the real-time model (Byrd-Dikeman) has the highest load capacity (L·/d) during the simulation. Data extrapolation techniques were used to extrapolate the data (with minor adjustment to model parameters) to the real-time models in the simulations. Using the models from above, two approaches were identified: 1) a model extrapolation technique and 2) a simulation technique. In order to quantify the load-capacity changes of each approach, a number of parameters were selected:
Residual simulation stress $\sigma$ due to a particle moving at a fixed speed.
Numerical time-series density $\rho$ in which the particle can move at a fixed speed: $\rho_{mn}(V_p, \dots) = \delta_m \delta_n$. Substituting $\rho_{mn}(V_p, \dots)$ for $V_p \rho_{mn}(V_p, \dots)$ gives $\rho_{mn}(V_p, \dots)$, the observed simulation stress, which can be calculated using $\rho_{mn}(V_p, 0)$. The simulation data provided the next values to extrapolate to the real-time model.
Data extrapolation provided the number three: $\tilde{\rho}_{mn}$, the mean moment velocity inside a cylinder ($0 < V_c < L = \delta_m \delta_n$).
Who provides guidance on data analysis for fluid mechanics assignments? The use of data analytics in data analysis is expanding now. Learn more in our post on "Concrete Data for Monitoring", which may turn up the "Data for Monitoring" episode as well.
How much do you expect the use of data analytics to give for a fluid mechanics assignment? Which data analysis task do you expect? Do you expect more help from an outside thinker, or an outside observer? The answer? No. In fact, the answers to these questions may fall into several categories. Let's start out by looking at what you expect data analysis to give for a fluid mechanics assignment.
Practical
First, you might think something different: Are fluid mechanics assignments a question posed by a non-professional company? Where does this go? If this is a really tricky one for you, you may be surprised. Maybe you'd prefer an outside thinker from a company with a decent technical background and some experience. Or you may experience more time spent working with professional engineers because you're currently involved with a company. Not so. If a data analytics user looks into an organization and builds some understanding of how to make a set of small adjustments that can be useful in an efficient fluid mechanics assignment, the need for a personal observer to work with you may just go along.
Second, you may want to ask about how data analysis can be done safely for a fluid mechanics assignment. How should I handle my data analysis, given what I saw happening, how I handled my non-product data from a commercial company, and then the results of those adjustments to the data? Some time ago I wrote a post about moving from having more or less data analyzed in my email as a data analytics user. Here they're listing some of the current positions: for the most part, the data analysis / statistical approaches outlined in the above mentioned posts are straightforward and easy to
But then after some thought I was able to make a DFA, which means that this language L should be regular: a pentagon with edges of 3 and self loops of 5 on each corner (can't post the image), with the start state as its final state. Now I don't know what's wrong in my pumping lemma proof.
Pumping Lemma. If A is a regular language, then there is a number p (the pumping length) such that, if s is any string in A of length at least p, then s may be divided into three pieces, s = xyz, satisfying the following conditions: 1. for each i ≥ 0, xy^i z ∈ A; 2. y ≠ ε (that is, |y| > 0); and 3. |xy| ≤ p.
Notes on Pumping Lemma, Finite Automata Theory and Formal Languages (TMV027/DIT321), Ana Bove, March 5th 2018. In the course we see two different versions of the pumping lemma, one for regular languages and one for context-free languages.
Question No. 17. The pumping lemma is a necessary condition for regular languages: if L is a regular language, then there is a number p (the pumping length) such that every s ∈ L with |s| ≥ p can be written s = xyz with (∀i ≥ 0)(xy^i z ∈ L) ∧ (|y| > 0) (this is why it is called "pumping"). (Proof of the pumping lemma: Sipser's book, p. 78.)
To prove that a given language, L, is not regular, we use the pumping lemma as follows: 1. We use a proof by contradiction. 2. We assume that L is regular.
Total 9 Questions have been asked from the Regular and Context-Free Languages, Pumping Lemma topic of the Theory of Computation subject in previous GATE papers. Average marks 1.44.
The pumping lemma is usually used on infinite languages, i.e. languages that contain an infinite number of words.
1 Introduction. The regular languages and finite automata. (Aug 18, 2013) Take the regular language L, and express it as a deterministic finite automaton with p states.
Pumping lemma for regular language. 0. How to prove that an even palindrome is not regular using the pumping lemma? A language L satisfies the pumping lemma for regular languages and also the pumping lemma for context-free languages. Which of the following statements about L is true? A. L is necessarily a regular language.
3. It must be recognized by a DFA. 4. That DFA must have a pumping constant N. 5. We carefully choose a string longer than N (so the lemma holds). 6.
Let L be a regular language (a.k.a. a type 3 language). The pumping lemma: official form. The pumping lemma basically summarizes what we've just said. Pumping Lemma. Suppose L is a regular language. Then L has the following property.
Answer: The lemma states that for a regular language every string can be broken into three parts x, y and z such that if we repeat y i times between x and z then the resulting string is also present in L. The pumping lemma is extremely useful in proving that certain sets are non-regular.
2. Non-Regular Languages and The Pumping Lemma. Not every language is a regular language. The conditions are: 1. for each i ≥ 0, xy^i z ∈ L; 2. |y| > 0.
Pumping Lemma Example: L = { 0^n 1^n | n ≥ 0 } is not regular. Suppose L were regular.
Full Course on TOC: https://www.youtube.com/playlist?list=PLxCzCOWd7aiFM9Lj5G9G_76adtyb4ef7i Membership:https://www.youtube.com/channel/UCJihyK0A38SZ6SdJirE
The pumping lemma holds true for the language of balanced parentheses (which is still non-regular): it is always possible to find a substring of balanced parentheses inside any string of balanced parentheses.
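The classic 0^n 1^n argument can be checked mechanically. The sketch below (illustrative code, not from any of the quoted sources) enumerates every decomposition s = xyz of 0^p 1^p with |xy| ≤ p and |y| > 0 and verifies that pumping y once more always yields a string outside L, which is exactly the contradiction the proof needs; the value p = 7 is an arbitrary choice for the hypothetical pumping length.

```python
def in_L(s):
    """Membership test for L = { 0^n 1^n | n >= 0 }."""
    n = len(s) // 2
    return s == "0" * n + "1" * n

p = 7                      # a hypothetical pumping length
s = "0" * p + "1" * p      # s is in L and |s| >= p
for xy_len in range(1, p + 1):          # condition |xy| <= p
    for y_len in range(1, xy_len + 1):  # condition |y| > 0
        x = s[: xy_len - y_len]
        y = s[xy_len - y_len : xy_len]
        z = s[xy_len:]
        # since |xy| <= p, y lies entirely in the 0-block,
        # so pumping y adds 0s without adding 1s and unbalances the string
        assert not in_L(x + y * 2 + z)
```

Because every allowed decomposition fails condition 1 of the lemma, L cannot be regular.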
Pumping Lemma for Regular Languages? How would I go about using the pumping lemma to prove that a language L is not regular? A question I'm struggling with is where the language L is given as: L := {a^n b^{2n} | n ≥ 0}, though I'd appreciate hints that might help me understand the lemma …
The usual pumping lemma gives only a necessary condition for a language to be regular, but there are more powerful versions giving necessary and sufficient conditions, using "block pumping properties". A. Ehrenfeucht, R. Parikh, and G. Rozenberg, Pumping lemmas for regular …
Surface Area of a Rectangular Prism - Math Guide
Add the six areas together.
Total surface area: 14x+14x+84+84+6x+6x = 40x+168
Since the final surface area is 328 \, cm^2, you can use the equation to solve for the missing side:
The missing side length is the value for x that makes the equation true.
One way to solve is by using substitution.
Let's solve the equation for x=2.
\begin{aligned} & 40 \times 2+168=328 \\\\ & 80+168=328 \\\\ & 248=328 \end{aligned}
This is NOT true, so x \neq 2.
Since x=2 was too small, let's try x=4.
\begin{aligned} & 40 \times 4+168=328 \\\\ & 160+168=328 \\\\ & 328=328 \end{aligned}
This is true, so x=4.
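The substitution steps above can also be checked directly. The short Python sketch below is illustrative, not part of the original guide; it uses the fact that face areas of 84, 14x and 6x are consistent with a prism of dimensions 14 by 6 by x.

```python
def surface_area(l, w, h):
    # total surface area of a rectangular prism: 2(lw + wh + hl)
    return 2 * (l * w + w * h + h * l)

# dimensions 14 by 6 by x give faces of area 84, 14x and 6x, as above
assert surface_area(14, 6, 2) == 248   # too small, so x != 2
assert surface_area(14, 6, 4) == 328   # matches the given area, so x = 4

# the linear equation 40x + 168 = 328 can also be solved directly
x = (328 - 168) / 40
print(x)  # 4.0
```

Solving the linear equation avoids guess-and-check entirely, which is useful when the answer is not a small whole number.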
What is a cuboid?
It is the name for a three-dimensional shape that is made up of rectangles and/or squares. A cuboid is another name for rectangular prisms.
What is the difference between a rectangular prism and a rectangular pyramid?
They both have a rectangular base, but a pyramid has one base and triangular lateral faces that meet in a point. A rectangular prism has two bases and rectangular lateral faces.
Is there a surface area of a rectangular prism formula?
Yes, the general formula is 2(lw+wh+hl).
How is the volume of a rectangular prism calculated?
The volume can be calculated with the formula V=l \times w \times h.
Jensen-Shannon divergence: Everything you need to know about this ML model
The Jensen-Shannon divergence is used to measure the similarity between two probability distributions, particularly in the field of Machine Learning. Find out everything you need to know about this
measure, from its history to its modern applications!
During the 20th century, the Danish mathematician Johan Jensen and the American mathematician Claude Shannon made major contributions to information theory and statistics.
Born in 1859, Johan Jensen devoted much of his career to the study of convex functions and inequalities. In 1906 he published an article entitled “On convex functions and inequalities between mean values”.
In it, he introduced the notion of convexity, and established several results on the inequalities that bear his name: Jensen’s inequalities, mathematical results describing the properties of convex functions.
Claude Shannon was born in 1916 and studied the mathematical foundations of information, including measures built on probability distributions.
In the 1940s, several decades after Jensen’s work, the American introduced the entropy measure on which divergences between probability distributions were later built.
It was based on the Kullback-Leibler divergence: a measure introduced in the 1950s by Solomon Kullback and Richard Leibler, widely used to quantify the difference between two distributions.
It measures the dissimilarity between two probability distributions based on the logarithms of the probability ratios.
Later, in the 1990s, researchers began to explore possible extensions and variations of the Kullback-Leibler divergence.
Their aim was to take better account of symmetry and dissimilarity between distributions. They drew on the pioneering work of Johan Jensen and Claude Shannon to create the Jensen-Shannon divergence.
What is the Jensen-Shannon divergence?
The Jensen-Shannon divergence was first introduced by Jianhua Lin in a 1991 article entitled “Divergence measures based on the Shannon entropy”.
He developed this metric as a measure of symmetric divergence between two probability distributions. Its main difference from the Kullback-Leibler divergence on which it is based is its symmetry.
It takes the weighted average of two KL divergences. One is calculated from the first distribution and the other from the second.
The Jensen-Shannon divergence can therefore be defined as the weighted average of the Kullback-Leibler divergences between each distribution and an average distribution.
How do you calculate the Jensen-Shannon divergence?
The first step in calculating the Jensen-Shannon divergence is to pre-process the data to obtain the probability distributions P and Q.
The probability distributions can then be estimated from the data. For example, it is possible to count the occurrences of each element in the sample.
When the distributions are available, the divergence can be calculated using the formula :
JS(P || Q) = (KL(P || M) + KL(Q || M)) / 2
where M = (P + Q) / 2 is the mixture (average) distribution, KL is the Kullback-Leibler divergence, and the notation D(A || B) simply reads “divergence of A from B”
A higher value of JS divergence indicates greater dissimilarity between distributions, while a value closer to zero indicates greater similarity.
To illustrate with a concrete example, suppose we have two texts and we want to measure their similarity.
Each text can be represented by a distribution of words, where each word is an element of the alphabet.
By counting the occurrences of words in each text and normalising these occurrences by the frequency of the total words, we obtain the probability distributions P and Q.
We then use the Jensen-Shannon divergence formula to calculate a value that indicates how similar the two texts are!
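A minimal Python sketch of that word-distribution example follows; it is illustrative code using only the standard library, and the two texts and all variable names are made up for the demonstration.

```python
import math
from collections import Counter

def kl(p, q):
    # Kullback-Leibler divergence with base-2 logs; terms with p_i = 0 contribute 0
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    # Jensen-Shannon divergence: average KL to the mixture M = (P + Q) / 2
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return (kl(p, m) + kl(q, m)) / 2

def word_distribution(text, vocab):
    counts = Counter(text.split())
    total = sum(counts.values())
    return [counts[w] / total for w in vocab]

a = "the cat sat on the mat"
b = "the dog sat on the log"
vocab = sorted(set(a.split()) | set(b.split()))
d = js(word_distribution(a, vocab), word_distribution(b, vocab))
# 0 <= d <= 1 with base-2 logs; d = 0 only for identical distributions
```

Note that the mixture M is never zero wherever P or Q is positive, which is why the JS divergence stays finite even when the two texts use disjoint vocabularies, unlike a raw KL divergence.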
There are several important properties of this measure. Firstly, it is always positive and reaches zero if and only if the distributions P and Q are identical.
In addition, it is bounded above by 1 when base-2 logarithms are used (by ln 2 in natural units), whatever the size of the alphabet. Its statistical behavior is well understood, so it can be used in hypothesis testing and confidence intervals.
Advantages and disadvantages
The strength of the Jensen-Shannon divergence is that it takes into account the overall structure of the distributions. It is therefore more resistant to local variations than other divergence
It is relatively efficient to calculate, which also makes it applicable to large quantities of data. These are its main advantages.
On the other hand, it can be sensitive to sample size. Estimates of probability distributions can be unreliable when sample sizes are small, and this can affect the similarity measure.
It may also be less suitable when the distributions are very different. This is because it does not capture the fine details of local differences.
Jensen-Shannon divergence and Machine Learning
JS divergence plays a crucial role in Machine Learning. It measures the similarity between the probability distributions associated with different samples or clusters.
It can be used to group similar data together or to classify new samples by comparing them with reference distributions.
In natural language processing, it can be used to compare word distributions in different texts. This can be used to identify similar documents, detect duplicate content or find semantic
relationships between texts.
It is also a tool for evaluating language models. In particular, it can be used to assess the diversity and quality of the texts generated.
By comparing the probability distributions of the generated texts with those of the reference texts, it is possible to measure the extent to which the generations are similar to or different from the
reference corpus.
In cases where the training and test data come from different distributions, the Jensen-Shannon divergence can be used to guide domain adaptation strategies.
This can help to adjust a model trained on a source distribution to better fit new data from a target distribution.
Finally, for sentiment analysis, JS divergence can be used to compare profiles between different documents or sample classes.
This allows similarities and differences in expression to be identified, for example for opinion detection or emotion classification.
Jensen-Shannon divergence and Data Science
For data science, JS divergence is used to compare the similarity between distributions of variables or characteristics in a data set.
It can be used to measure the difference between observed data distributions and expected or reference distributions.
This allows variations and discrepancies between different distributions to be identified, which can be valuable for detecting anomalies or validating hypotheses.
For textual data analysis, this measure can be used to estimate the similarity between distributions of words, phrases or themes in documents.
This can help to group similar documents, extract common topics or detect significant differences between sets of documents.
For example, it can be used to classify documents based on their content or for sentiment analysis by comparing sentiment distributions between different texts.
When there is high dimensionality in the data, Jensen-Shannon divergence can be used to select the most discriminating features or to reduce the dimensionality of the data.
By calculating the divergence between the distributions of different characteristics, it is possible to identify those that contribute most to the differentiation between classes or groups of data.
Model evaluation: In the process of developing and evaluating predictive models, JS divergence can be used as a metric to compare the probability distributions of predictions and actual values.
This makes it possible to assess the quality of the model by measuring how closely the predictions match the actual observations. For example, it can be used to evaluate classification, regression or
recommendation models.
Finally, it can be used to measure the similarity between observations or instances in a dataset.
By comparing the distributions of features between different instances, it is possible to determine the proximity or distance between them. This can be used in clustering tasks to group similar
observations or to perform similarity searches in large databases.
Conclusion: Jensen-Shannon divergence, a key tool for data analysis and machine learning
Since its creation, the Jensen-Shannon divergence has been widely used in many fields, including computer science, statistics, natural language processing, bioinformatics and machine learning.
It is still an essential tool for measuring similarity between probability distributions, and has opened up new perspectives in data analysis and statistical modelling.
Researchers around the world use it to solve classification and clustering problems. It is a key element in the toolbox of scientists and practitioners in many fields, starting with Data Science.
To learn how to master all the techniques of analysis and Machine Learning, you can choose DataScientest.
Our courses will give you all the skills you need to become a data engineer, analyst, data scientist, data product manager or ML engineer.
You’ll learn about the Python language and its libraries, DataViz tools, Business Intelligence solutions and databases.
All our courses are entirely distance learning, lead to professional certification and are eligible for funding options. Discover DataScientest!
Accuracy and Precision: Definition, Examples
Design of Experiments > Accuracy and Precision
In any experiment, it is impossible to achieve perfect measurements (even the best atomic clock isn’t flawless: it loses a second every 300 billion years)[1]. How close your measurement comes to the true value is called its accuracy.
Another concept closely related to accuracy is precision, which describes the quality of your measurements.
Accuracy is how close you are to the true value or theoretical value. For example, let’s say you know your true height is exactly 5’9″.
• You measure yourself with a yardstick and get 5’0″. Your measurement is not accurate.
• You measure yourself again with a laser yardstick and get 5’9″. Your measurement is accurate.
Note: The true value is sometimes called the theoretical value.
Precision is how close two or more measurements are to each other. If you consistently measure your height as 5’0″ with a yardstick, your measurements are precise.
Accuracy of Analysis and Precision Together
If you are precise, that doesn’t necessarily mean you are accurate. However, if you are consistently accurate, you are also precise.
“More” Precise
If you want to tell which set of data is more precise, find the range (the difference between the highest and lowest scores). For example, let’s say you had the following two sets of data:
• Sample A: 32.56, 32.55, 32.48, 32.49, 32.48.
• Sample B: 15.38, 15.37, 15.36, 15.33, 15.32.
Subtract the lowest data point from the highest:
• Sample A: 32.56 – 32.48 = .08.
• Sample B: 15.38 – 15.32 = .06.
Sample B has the lowest range (.06) and so is the more precise.
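The range comparison above can be scripted directly; this is just a sketch of the same computation in Python, using the standard library only.

```python
sample_a = [32.56, 32.55, 32.48, 32.49, 32.48]
sample_b = [15.38, 15.37, 15.36, 15.33, 15.32]

# range = highest data point minus lowest data point
range_a = round(max(sample_a) - min(sample_a), 2)  # 0.08
range_b = round(max(sample_b) - min(sample_b), 2)  # 0.06

# the smaller range indicates the more precise sample
assert range_b < range_a  # Sample B is the more precise
```

The `round(..., 2)` call tidies up floating-point residue so the ranges match the hand calculation.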
More Examples
While accuracy is “how close to the mark,” precision is “how close measurements are together.” If you measure once and get the true value, you’re accurate. If you consistently measure the true value
over repeated measurements, you are precise.
• Accurate and precise: If a weather thermometer reads 75°F outside and it really is 75°F, the thermometer is accurate. If the thermometer consistently registers the exact temperature for several days in a row, the thermometer is also precise.
• Precise, but not accurate: A refrigerator thermometer is read ten times and registers degrees Celsius as: 39.1, 39.4, 39.1, 39.2, 39.1, 39.2, 39.1, 39.1, 39.4, and 39.1. However, the real
temperature inside the refrigerator is 37 degrees C. The thermometer isn’t accurate (it’s more than two degrees off the true value), but as the numbers are all close to 39.2, it is precise.
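In code, the refrigerator example separates the two ideas: the bias (distance of the mean from the true value) measures accuracy, while the standard deviation of the readings measures precision. A minimal sketch using Python's standard library:

```python
import statistics

# The ten refrigerator-thermometer readings from the example (degrees C)
readings = [39.1, 39.4, 39.1, 39.2, 39.1, 39.2, 39.1, 39.1, 39.4, 39.1]
true_temp = 37.0

bias = statistics.mean(readings) - true_temp  # accuracy error: large here
spread = statistics.stdev(readings)           # precision: small here
```

Here the bias is about 2.2 degrees (poor accuracy) while the spread is only about 0.12 degrees (good precision).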
Why accuracy in statistics is important
Accuracy in statistics is important because it helps us draw good conclusions from data. Inaccurate data leads to incorrect conclusions, which could result in poor decisions with adverse
consequences. While it might not be important if you aren’t accurate when measuring your own weight, mismeasurement of weight in a clinical setting could result in serious consequences.
Several factors can have an impact on data accuracy, including:
• Poor data collection methods.
• Applying incorrect statistical methods, such as using a chi-square test on data that isn’t random.
• Misinterpreting results.
To boost data accuracy, follow these good practices:
• Use the right sampling method for your design, gather data from representative samples if possible (although sometimes you might be forced to use a less rigid method such as convenience
sampling), and be aware of any bias.
• Use suitable tests for the data and adhere to correct assumptions for those tests.
• Understand your data’s limitations. For example, avoid inferring causality from correlation.
[1] Howell, E. (2022). New atomic clock loses only one second every 300 billion years.
|
{"url":"https://www.statisticshowto.com/accuracy-and-precision/","timestamp":"2024-11-04T15:35:00Z","content_type":"text/html","content_length":"70132","record_id":"<urn:uuid:a118258e-13e0-40a0-a770-2b441f387f2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00672.warc.gz"}
|
Category:Statistical Algorithms
Algorithms which generate statistics of a bot's past movement, and do not store a full log. This is in contrast with log-based algorithms, which store a full log of movement instead of more
processed statistics.
This category has only the following subcategory.
Pages in category "Statistical Algorithms"
This category contains only the following page.
|
{"url":"https://robowiki.net/wiki/Statistical_Algorithms","timestamp":"2024-11-08T18:06:32Z","content_type":"text/html","content_length":"15648","record_id":"<urn:uuid:7a967453-4f44-431f-afb1-055339fb5496>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00349.warc.gz"}
|
2-Dimensional Cartesian Coordinate System
This section provides an introduction of 2-dimensional Cartesian coordinate systems, which uses perpendicular projections on 2 perpendicular axes to describe any locations in the frame of reference.
When describing an object that is moving along non-straight line, we need to use 2-dimensional or 3-dimensional frame of references and coordinate systems.
For example, the trajectory of a flying golf ball is not a straight line, but it can described with a 2-dimensional frame of reference and an associated coordinate system.
First, let's define a 2-dimensional frame of reference as a vertical rectangle:
• The first edge runs from the golf ball stand to target hole.
• The second edge runs from the golf ball stand straight up to the sky for 10 meters.
• The third edge runs from the target hole straight up to the sky for 10 meters.
• The last edge connects the end point of the second edge and the first edge.
Next, let's create a simple coordinate system:
• Set a 2-dimensional Cartesian coordinate system on the frame of reference.
• Set the origin of the Cartesian coordinate system at the golf ball stand.
• Set the x-axis along the first edge of the frame of reference. Scale the x-axis with 1 meter per unit.
• Set the y-axis along the second edge of the frame of reference. Scale the y-axis with 1 meter per unit.
Now we can describe any location of the golf ball while it's flying in the air as a pair of coordinate numbers by reading the scales of its perpendicular projections on the x-axis and the y-axis.
For example, the highest location of the golf ball in the picture below can be described as (14.89, 7.93) because:
• Its perpendicular projection on the x-axis is 14.89 meters.
• Its perpendicular projection on the y-axis is 7.93 meters.
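As a minimal sketch (in Python, with the example's numbers), the coordinate pair can be stored as a tuple, and derived quantities such as the straight-line distance from the origin at the golf ball stand follow directly:

```python
import math

# Coordinates (in meters) of the golf ball's highest point in the frame above
highest_point = (14.89, 7.93)
x_proj, y_proj = highest_point  # perpendicular projections on the two axes

# Straight-line distance from the origin (the golf ball stand)
distance_from_origin = math.hypot(x_proj, y_proj)
```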
|
{"url":"https://www.herongyang.com/Physics/Reference-2-Dimensional-Cartesian-Coordinate-System.html","timestamp":"2024-11-14T20:29:20Z","content_type":"text/html","content_length":"12174","record_id":"<urn:uuid:e5854dfa-31c7-47d0-a65f-17bb1696730e>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00136.warc.gz"}
|
Calculator of Cantilever beam with load at any point
Cantilever beam with load at any point
This online calculator is designed to calculate the slope and deflection of a cantilever beam for concentrated load at any point.
Slope at free end = Pa^2 / (2EI)
Deflection at any section = Px^2(3a - x) / (6EI) (for x less than a)
Deflection at any section = Pa^2(3x - a) / (6EI) (for a less than x)
• P - the externally applied load,
• E - the elastic modulus,
• I - the area moment of inertia,
• L - the length of the beam and
• x - the position along the beam at which the deflection is evaluated
• a - the distance of the load from the fixed end
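The formulas above translate directly into code. The following Python sketch (function and variable names are mine, not the calculator's) evaluates the deflection on either side of the load point, using consistent units throughout:

```python
def cantilever_deflection(P, E, I, a, x):
    """Deflection at position x of a cantilever with point load P applied
    at distance a from the fixed end (consistent units assumed)."""
    if x <= a:
        return P * x**2 * (3 * a - x) / (6 * E * I)
    return P * a**2 * (3 * x - a) / (6 * E * I)

def slope_at_free_end(P, E, I, a):
    """Slope at the free end: P a^2 / (2 E I)."""
    return P * a**2 / (2 * E * I)
```

For example, with P = 1000 N, E = 200 GPa, I = 1e-6 m^4 and the load at a = 1 m, the deflection under the load is about 1.67 mm.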
Note. This calculator of a cantilever beam with load at any point is provided for your personal use and should be used as a guide only. Construction and other decisions should NOT be
based on the results of this calculator. Although this calculator has been tested, we cannot guarantee the accuracy of its calculations or results.
|
{"url":"https://wpcalc.com/en/cantilever-beam-with-load-at-any-point/","timestamp":"2024-11-05T15:40:06Z","content_type":"text/html","content_length":"76464","record_id":"<urn:uuid:8c562c4b-3423-4f40-ab99-36910e573f23>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00492.warc.gz"}
|
Counting Unique Combinations in Pandas DataFrame
Pandas is a powerful data manipulation library in Python that provides various functions to analyze and manipulate data. One common task in data analysis is counting unique combinations of values in
a DataFrame. In this article, we will explore different methods to achieve this using Pandas.
Understanding the Problem
Before diving into the solutions, let’s first understand the problem. Consider a DataFrame with multiple columns, each representing a different attribute. We want to count the unique combinations of
values across these columns and determine the frequency of each combination.
For example, let’s say we have a DataFrame with three columns: ‘Category’, ‘Color’, and ‘Size’. We want to count the unique combinations of values across these columns and find out how many times
each combination occurs.
Method 1: Grouping and Counting
One way to count unique combinations in a Pandas DataFrame is by using the groupby function along with the size method. We can group the DataFrame by the desired columns and then count the size of
each group.
import pandas as pd
# Create a sample DataFrame
data = {'Category': ['A', 'B', 'A', 'B', 'A'],
'Color': ['Red', 'Blue', 'Red', 'Green', 'Blue'],
'Size': ['Small', 'Large', 'Small', 'Medium', 'Medium']}
df = pd.DataFrame(data)
# Count unique combinations
unique_combinations = df.groupby(['Category', 'Color', 'Size']).size().reset_index(name='Count')
The above code will group the DataFrame by the columns ‘Category’, ‘Color’, and ‘Size’, and then count the size of each group. The result will be a new DataFrame with the unique combinations and
their corresponding counts.
Method 2: Using Value Counts
Another approach is to use the value_counts function on each column individually and then combine the results. We can achieve this by concatenating the value counts of each column using the pd.concat function.
import pandas as pd
# Create a sample DataFrame
data = {'Category': ['A', 'B', 'A', 'B', 'A'],
'Color': ['Red', 'Blue', 'Red', 'Green', 'Blue'],
'Size': ['Small', 'Large', 'Small', 'Medium', 'Medium']}
df = pd.DataFrame(data)
# Count unique combinations
unique_combinations = pd.concat([df[col].value_counts().reset_index().rename(columns={col: 'Count', 'index': col}) for col in df.columns], axis=1)
In this code, we iterate over each column in the DataFrame and apply the value_counts function to get the count of each unique value. We then combine the results by concatenating the DataFrames along
the columns axis. Note that this produces per-column value counts side by side rather than counts of joint combinations, so it is only appropriate when each column is to be summarized independently.
Method 3: Using MultiIndex
If we want to preserve the hierarchical structure of the unique combinations, we can use a MultiIndex. We can achieve this by setting the desired columns as the index and then counting the
occurrences using the groupby function.
import pandas as pd
# Create a sample DataFrame
data = {'Category': ['A', 'B', 'A', 'B', 'A'],
'Color': ['Red', 'Blue', 'Red', 'Green', 'Blue'],
'Size': ['Small', 'Large', 'Small', 'Medium', 'Medium']}
df = pd.DataFrame(data)
# Count unique combinations
unique_combinations = df.groupby(['Category', 'Color', 'Size']).size().reset_index(name='Count').set_index(['Category', 'Color', 'Size'])
In this code, we set the desired columns as the index using the set_index function. Then, we group the DataFrame by the index columns and count the occurrences using the size method. The result will
be a DataFrame with a MultiIndex representing the unique combinations and their counts.
Counting unique combinations in a Pandas DataFrame is a common task in data analysis. In this article, we explored different methods to achieve this, including grouping and counting, using value
counts, and using a MultiIndex. Each method has its advantages and can be used depending on the specific requirements of the analysis.
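Recent pandas versions (1.1 and later) also offer a direct route: DataFrame.value_counts() counts unique rows in one call, returning a Series indexed by the combination. A short sketch using the same sample data:

```python
import pandas as pd

data = {'Category': ['A', 'B', 'A', 'B', 'A'],
        'Color': ['Red', 'Blue', 'Red', 'Green', 'Blue'],
        'Size': ['Small', 'Large', 'Small', 'Medium', 'Medium']}
df = pd.DataFrame(data)

# Count unique row combinations directly; the result is a Series
# whose MultiIndex holds each (Category, Color, Size) combination
combo_counts = df.value_counts()
```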
Example 1: Counting unique combinations of two columns in a Pandas DataFrame
import pandas as pd
# Create a sample DataFrame
data = {'Column1': ['A', 'B', 'C', 'A', 'B'],
'Column2': ['X', 'Y', 'Z', 'X', 'Y']}
df = pd.DataFrame(data)
# Count unique combinations of Column1 and Column2
unique_combinations = df.groupby(['Column1', 'Column2']).size().reset_index(name='Count')
Example 2: Counting unique combinations of multiple columns in a Pandas DataFrame
import pandas as pd
# Create a sample DataFrame
data = {'Column1': ['A', 'B', 'C', 'A', 'B'],
'Column2': ['X', 'Y', 'Z', 'X', 'Y'],
'Column3': ['P', 'Q', 'R', 'P', 'Q']}
df = pd.DataFrame(data)
# Count unique combinations of Column1, Column2, and Column3
unique_combinations = df.groupby(['Column1', 'Column2', 'Column3']).size().reset_index(name='Count')
Reference Links:
1. Pandas documentation: https://pandas.pydata.org/docs/
2. Pandas GroupBy documentation: https://pandas.pydata.org/pandas-docs/stable/reference/groupby.html
Counting unique combinations in a Pandas DataFrame is a common task in data analysis. By using the groupby function in Pandas, we can easily count the occurrences of unique combinations of columns in
a DataFrame. This allows us to gain insights into the distribution and frequency of different combinations within our data. The examples provided demonstrate how to count unique combinations of two
columns and multiple columns in a DataFrame. By leveraging the power of Pandas, we can efficiently analyze and summarize our data to extract meaningful information.
|
{"url":"https://dnmtechs.com/counting-unique-combinations-in-pandas-dataframe/","timestamp":"2024-11-12T16:53:09Z","content_type":"text/html","content_length":"83235","record_id":"<urn:uuid:f36b0fd4-e1b7-4d78-9af9-39d11534f822>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00801.warc.gz"}
|
A Difference Between Matlab and Octave
• Posted in Thesis Tales
• Tagged converting octave code to java, java, octave, octave vs. matlab, rgb2gray, the few times you'll ever see me use an emoticon, yes i posted since it's february 29
Due to the nature of the algorithms we are testing for our thesis, we had to “prototype” the procedures in Matlab so that we can easily modify parameters and test variables. However, since Matlab is
expensive and we are a university that does not tolerate piracy ;), we used GNU Octave, a FOSS equivalent of Matlab (think Mono for C#).
We are done with the algorithm-prototyping part and we are now porting our Matlab code to Java, since this thesis is meant to be used by scientists, with a GUI and all that comes with standard
software. A big part of this task is in coding the functions that are built-in in Matlab; remember that Matlab is meant specially for mathematical purposes (it is a portmanteau of Matrix Laboratory)
while Java is more general purpose, closer to metal, if you will.
For the past few days, I’ve been trying to implement the Matlab function rgb2gray which, as the name suggests, converts an RGB-colored image to grayscale. Now, there are a lot of ways to convert an
image to grayscale but getting a grayscale isn’t the main point here. Getting it the way Matlab/Octave does is essential so that we can recreate in Java the recognition accuracy we achieved in
Octave. We will be manipulating these pixel values after all.
So, I looked into Matlab’s documentation of rgb2gray and found that, for a given RGB pixel, it gets the corresponding grayscale value by the following weighted sum:
0.2989 * R + 0.5870 * G + 0.1140 * B
(Or something close to those constants/giving the same priority over the RGB components. That is, green most weighted, followed by red, and then blue. This priority reflects the sensitivity of the
human eye to these colors. See Luma (video).)
I then ran some tests on Octave to verify the docs:
octave3.2:1> four = imread("four.JPG");
octave3.2:2> four(1,1,1) # The red component of the first pixel of four.JPG
ans = 159
octave3.2:3> four(1,1,2) # The green component of the first pixel of four.JPG
ans = 125
octave3.2:4> four(1,1,3) # The blue component of the first pixel of four.JPG
ans = 64
octave3.2:5> grayval = 0.2989 * 159 + 0.5870 * 125 + 0.1140 * 64
grayval = 128.20
So, the grayscale equivalent of the first pixel of four.JPG will have the value floor(128.20)=128. Sure enough, when I encoded the procedure in Java, the first pixel of the grayscaled four.JPG has
the value 127—close enough taking into account the possible differences in how Java and Octave handle floating point computation.
But wait, there’s more…
octave3.2:6> fourgray = rgb2gray(four);
octave3.2:7> fourgray(1,1)
ans = 116
The value of the first pixel of four.JPG after rgb2gray is 116! Now that’s something no amount of discrepancy in floating-point handling can account for. Besides, hasn’t Octave itself computed a
value close to Java’s 127 when done manually?
That’s when I realized that Octave may not be an exact port/clone of Matlab after all. I decided to Google “rgb2gray octave” and, sure enough, the documentation of Octave at SourceForge points to a
departure from Matlab’s implementation:
Function File: gray = rgb2gray (rgb)
If the input is an RGB image, the conversion to a gray image is computed as the mean value of the color channels.
And verifying the docs…
octave3.2:8> floor((159 + 125 + 64)/3)
ans = 116
Problem solved.
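The two conversions can be written side by side. The following Python sketch (mine, not the thesis code) reproduces both numbers from the session above: floor of 128.1961 for MATLAB's weighting, 116 for the plain channel mean:

```python
import math

def matlab_gray(r, g, b):
    """MATLAB-style rgb2gray: luma-weighted sum (ITU-R BT.601 weights)."""
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

def octave_gray(r, g, b):
    """Older Octave rgb2gray: plain mean of the three channels."""
    return (r + g + b) / 3

# First pixel of four.JPG from the session above
r, g, b = 159, 125, 64
```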
I’m pretty sure that this isn’t the only difference between Matlab and Octave. The next time I encounter another one, I’ll try to document it here, time permitting.
BONUS: My encounters with Octave so far give credence to this but I have yet to verify it thoroughly. It seems that Matlab/Octave loops through matrices (arrays of at least two dimensions, in Java
/C-speak) in column-major order. This isn’t exactly difficult to do in Java/C but Java/C programmers are more used to traversing multidimensional arrays in row-major order, since this should result in
fewer page faults and therefore faster code. Still, for some computations, the order in which you traverse a matrix matters a lot. Be careful there.
|
{"url":"http://kodeplay.skytreader.net/tag/yes-i-posted-since-its-february-29/","timestamp":"2024-11-13T08:30:59Z","content_type":"application/xhtml+xml","content_length":"31520","record_id":"<urn:uuid:3fc86323-4a91-43e5-b404-03128db980cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00230.warc.gz"}
|
Numpy - Count Zeros in Array with Examples
In this tutorial, we will look at how to count zeros in a numpy array. We will also look at how to count zeros present in each row and each column of a 2d array.
How to count zeros in a numpy array?
You can use np.count_nonzero() or the np.where() functions to count zeros in a numpy array. In fact, you can use these functions to count values satisfying any given condition (for example, whether
they are zero or not, or whether they are greater than some value or not, etc).
Note that using np.count_nonzero() is the simpler of the two methods. The following is the syntax to count zeros using this function –
# arr is a numpy array
# count of zeros in arr
n_zeros = np.count_nonzero(arr==0)
Let’s look at some examples of how to use the above functions. First, we will create a couple of numpy arrays that we will be using throughout this tutorial.
import numpy as np
# one-dimensional array
arr_1d = np.array([3, 0, 5, 2, 1, 0, 8, 6])
# two-dimensional array
arr_2d = np.array([[4, 3, 0],
[0, 0, 2],
[2, 5, 6]])
[3 0 5 2 1 0 8 6]
[[4 3 0]
[0 0 2]
[2 5 6]]
Now we have a one-dimensional array and a two-dimensional array for which we will be counting the zeros.
Count all zeros in the array
To count all the zeros in an array, simply use the np.count_nonzero() function checking for zeros. It returns the count of elements inside the array satisfying the condition (in this case, if it’s
zero or not).
Let’s use this function to count the zeros in arr_1d created above:
# count zeros in 1d array
n_zeros = np.count_nonzero(arr_1d==0)
# display the count of zeros
print(n_zeros)
We get 2 as the output since there are two zero elements in the 1d array arr_1d.
You can also use the same syntax to count zeros in higher dimensional arrays. Let’s count the number of zeros in arr_2d using np.count_nonzero()
# count zeros in 2d array
n_zeros = np.count_nonzero(arr_2d==0)
# display the count of zeros
print(n_zeros)
We get 3 as the output since there are three zero value elements in the array arr_2d.
Count of zeros in each row
To count zeros in each row, pass axis=1 to the np.count_nonzero() function. Let’s count zeros in each row of arr_2d
# count zeros in each row
n_zeros = np.count_nonzero(arr_2d==0, axis=1)
# display the count of zeros
print(n_zeros)
[1 2 0]
It returns a numpy array of the count of zeros in each row. You can see that we have one zero-element in the first row, two in the second row, and no such elements in the third row.
Count of zeros in each column
To count zeros in each column, pass axis=0 to the np.count_nonzero() function. Let’s count the zeros in each column of arr_2d
# count zeros in each column
n_zeros = np.count_nonzero(arr_2d==0, axis=0)
# display the count of zeros
print(n_zeros)
[1 1 1]
We have one zero-element in each column of the array arr_2d.
For more on the np.count_nonzero() function, refer to its documentation.
Using np.where() to count zeros in an array
Alternatively, you can use np.where() to count the zeros in an array. np.where() is generally used to find indexes of elements satisfying a condition in a numpy array.
You can use this function to find indexes of zero-valued elements in the array and then count them to get the count of zeros in the array. Let’s count the zeros in the array arr_1d using this method:
# count zeros with np.where
result = np.where(arr_1d==0)
# show the result of np.where
print(result)
# count of zeros
n_zeros = result[0].size
# display the count of zeros
print(n_zeros)
(array([1, 5], dtype=int64),)
You can see that np.where() results in a tuple of numpy arrays showing the indexes satisfying the condition. We see that zeros are present at indexes 1 and 5 in the array arr_1d. To get the count, we
use the .size attribute of this index array.
You can also use np.where() to count zeros in higher-dimensional arrays as well. For example, let’s use it to count zeros in arr_2d
# count zeros with np.where
result = np.where(arr_2d==0)
# show the result of np.where
print(result)
# count of zeros
n_zeros = result[0].size
# display the count of zeros
print(n_zeros)
(array([0, 1, 1], dtype=int64), array([2, 0, 1], dtype=int64))
The returned value from np.where() is a tuple of two arrays, the first one shows the row indexes of elements matching the condition (element equal to zero) and the second array gives the column
indexes for those elements. Counting the indexes in any of these arrays gives the count of zeros in the array.
With this, we come to the end of this tutorial. The code examples and results presented in this tutorial have been implemented in a Jupyter Notebook with a Python (version 3.8.3) kernel running NumPy
version 1.18.5.
|
{"url":"https://datascienceparichay.com/article/numpy-count-zeros/","timestamp":"2024-11-07T18:44:44Z","content_type":"text/html","content_length":"266109","record_id":"<urn:uuid:61266e79-ad17-4117-b4df-67a8845a13e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00330.warc.gz"}
|
% calculation for a table card
I need to calculate percentages in a table card; the table looks like this:
I want to calculate the % based on Sent, e.g. delivered % = 4800/5000, total bounce % = 120/5000.
I wrote this beast mode, but it returns nothing. Can you please let me know what is not working?
Thank you.
WHEN `Category` IN ('Delivered','Total Bounce','Opens','Unsubscribes')
THEN (`total`) END)
WHEN `Category` IN ('Sent')
THEN (`total` )
Category        total    %
Sent            5000
Delivered       4800     96%
Total bounce    120      2.4%
Opens           900      18%
Unsubscribes    24       0%
• I believe that Beast Modes can only perform calculations on rows, which means we may not be able to display your data in a vertical table perfectly.
Depending on how your data set is structured you have a few options... See the attached PDF for scenarios I have drawn up for you.
• Hello, DDalt,
thank you for your help. My data is stored in option 1 in your pdf. Unfortunately, what I am showing is what my stakeholders would like to have. I hope I can do it in the way they want. Any
other ideas?
Thank you.
• Sure! So in this case, we will need to add a column to our original data that stores the Total Sent value next to each value. This will enable us to perform a calculation on each row where we can
divide the metric value by the Total Sent value.
I've attached another PDF of how I did this using a Redshift transform and how you can build your table in Analyzer using beast mode.
• Hi, DDalt,
Thank you for your help. I did as what's in your data flow. However, there is another issue now: In the data flow, the total is for all 'Sent' for all data I have, however, in my card, I only
wanted to show for a certain time period, even better, I would like to give user the ability to select the date range, thus the total sent is changing based on the filters. Therefore, the % is
not correct.
Any other ideas?
Thank you very much.
• Hey Olivia,
I updated my data set to include a date column and am attaching an example of a window function you can use to select the maximum value for each day (which should be the "Sent" value) and
ascribe it to each row within that day.
If you send multiple emails per day, you could consider partitioning by 'email_name' instead of date, which would give you the largest value for each email sent
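Outside of Domo, the same idea — broadcast the "Sent" total onto every row, then divide — can be sketched in pandas; the column names below are hypothetical and only mirror the table in the question:

```python
import pandas as pd

df = pd.DataFrame({
    'Category': ['Sent', 'Delivered', 'Total bounce', 'Opens', 'Unsubscribes'],
    'total': [5000, 4800, 120, 900, 24],
})

# Look up the 'Sent' total and divide every row's total by it
sent_total = df.loc[df['Category'] == 'Sent', 'total'].iloc[0]
df['pct'] = df['total'] / sent_total * 100
```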
This discussion has been closed.
|
{"url":"https://community-forums.domo.com/main/discussion/comment/37803","timestamp":"2024-11-11T10:14:19Z","content_type":"text/html","content_length":"390233","record_id":"<urn:uuid:ad96c823-a72e-463b-9855-5dd49d6d6e06>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00611.warc.gz"}
|
White Dwarfs & Neutron Stars - The White Dwarf
Not all the electrons in the core remnant are degenerate to begin with. As has been noted, the surface of the white dwarf is still very hot, giving the surface electrons enough energy to be in higher
energy states. However, these electrons radiate energy, and with no thermonuclear reactions to replenish it, fall into lower energy states. Once an electron is in the lowest energy state possible it
can no longer radiate energy. The white dwarf’s luminosity hence decreases with time, until it becomes cold and dark.
There is a maximum limiting mass for the existence of a white dwarf, and this was calculated mathematically by Subrahmanyan Chandrasekhar. To follow the calculation, first consider the equations of
hydrostatic equilibrium and mass conservation,

\[ \frac{dP}{dr} = -\frac{G m(r)\,\rho}{r^{2}}, \qquad \frac{dm}{dr} = 4\pi r^{2}\rho, \tag{16} \]

and, to close the system of equations, let the pressure obey a polytropic relation,

\[ P = K \rho^{1+1/n}. \tag{17} \]

Combining (16) and (17), we get

\[ \frac{1}{r^{2}}\frac{d}{dr}\!\left(\frac{r^{2}}{\rho}\frac{dP}{dr}\right) = -4\pi G \rho. \tag{18} \]

Defining a dimensionless radius and density,

\[ r = \alpha\xi, \qquad \rho = \rho_{c}\,\theta^{n}, \]

equation (18) and the boundary conditions become

\[ \frac{1}{\xi^{2}}\frac{d}{d\xi}\!\left(\xi^{2}\frac{d\theta}{d\xi}\right) = -\theta^{n}, \qquad \theta(0) = 1, \quad \theta'(0) = 0. \]

This is called the polytropic equation of index n (the Lane–Emden equation), and by looking at the first zero of \(\theta\), the radius of the star is obtained. For a fully degenerate electron gas the
exact equation of state can be written parametrically as

\[ P = A\,f(x), \qquad \rho = B x^{3}, \tag{21} \]

where A and B are constants containing the masses and the ionisation of the particles concerned. It can be seen that the limiting cases of this equation of state are polytropes. By substituting
equation (21) into the polytrope and numerically solving it, a maximum mass of 1.4 solar masses is found.
{"url":"https://allrite.au/science/dead_stars/dead_stars5/","timestamp":"2024-11-12T12:47:53Z","content_type":"text/html","content_length":"66666","record_id":"<urn:uuid:99e564f9-c95b-4286-851b-8c9c0e28dd1f>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00588.warc.gz"}
|
Set Theory and its Interactions
Organizers: Carlos Di Prisco (ca.di@uniandes.edu.co), Christina Brech (brech@ime.usp.br)
• Monday 13
15:00 - 15:45
Hereditary interval algebras and cardinal characteristics of the continuum
Carlos Martinez-Ranero (Universidad de Concepción, Chile)
An interval algebra is a Boolean algebra which is isomorphic to the algebra of finite unions of half-open intervals of a linearly ordered set. An interval algebra is hereditary if every
subalgebra is an interval algebra. We answer a question of M. Bekkali and S. Todorcevic, by showing that it is consistent that every \(\sigma\)-centered interval algebra of size \(\mathfrak{b}\)
is hereditary. We also show that there is, in ZFC, a hereditary interval algebra of cardinality \(\aleph_1\).
15:45 - 16:30
Preservation of some covering properties by elementary submodels
Lucia Junqueira (Universidade de São Paulo, Brasil) joint with Robson A. Figueiredo and Rodrigo R. Carvalho
Given a topological space \((X, \tau)\) and an elementary submodel \(M\), we can define the topological space \(X_M = (X\cap M, \tau _M)\), where \(\tau _M\) is the topology on \(X \cap M\)
generated by \(\{ V\cap M : V \in \tau \cap M \}\). It is natural to ask which topological properties are preserved by this new operation. For instance, if \(X\) is \(T_2\), then \(X_M\) is also
\(T_2\). On the other hand, \(X_M\) compact implies \(X\) compact. A systematic study of it was initiated by L. Junqueira and F. Tall in 1998.
In the paper ``More reflection in topology'', published in Fundamenta Mathematicae in 2003, F. Tall and L. Junqueira studied the reflection of compactness and, more specifically, when we can
have, for \(X\) compact, \(X_M\) compact non-trivially, {\it i.e.}, with \(X \neq X_M\). It is natural to try to extend this study to other covering properties.
We will present some results concerning the preservation of Lindelöfness. We will also discuss the preservation of some of its strengthenings, like the Menger and Rothberger properties.
16:45 - 17:30
Group operations and universal minimal flows
Dana Bartosova (University of Florida, Estados Unidos)
Every topological group admits a unique, up to isomorphism, universal minimal flow that maps onto every minimal (with respect to inclusion) flow. We study interactions between group operations and
corresponding universal minimal flows.
17:30 - 18:15
Some lessons after the formalization of the ctm approach to forcing
Pedro Sánchez Terraf (Universidad Nacional de Córdoba, Argentina) joint with Emmanuel Gunther, Miguel Pagano, and Matías Steinberg
In this talk we'll discuss some highlights of our computer-verified proof of the construction, given a countable transitive set model \(M\) of \(\mathit{ZFC}\), of a generic extension \(M[G]\)
satisfying \(\mathit{ZFC}+\neg\mathit{CH}\). In particular, we isolated a set \(\Delta\) of \(\sim\)220 instances of the axiom schemes of Separation and Replacement and a function \(F\) such that
for any finite fragment \(\Phi\subseteq\mathit{ZFC}\), \(F(\Phi)\subseteq\mathit{ZFC}\) is also finite and if \(M\models F(\Phi) + \Delta\) then \(M[G]\models \Phi + \neg \mathit{CH}\).
We also obtained the formulas yielded by the Forcing Definability Theorem explicitly.
To achieve this, we worked in the proof assistant Isabelle, basing our development on the theory Isabelle/ZF by L. Paulson and others.
The vantage point of the talk will be that of a mathematician but elements from the computer science perspective will be present. Perhaps some myths regarding what can effectively be done using
proof assistants/checkers will be dispelled.
We'll also compare our formalization with the recent one by Jesse M. Han and Floris van Doorn in the proof assistant Lean.
• Tuesday 14
15:00 - 15:45
On non-classical models of ZFC
Giorgio Venturi (Universidade Estadual de Campinas, Brasil), joint work with Sourav Tarafder and Santiago Jockwich
In this talk we present recent developments in the study of non-classical models of ZFC.
We will show that there are algebras that are neither Boolean, nor Heyting, but that still give rise to models of ZFC. This result is obtained by using an algebra-valued construction similar to
that of the Boolean-valued models. Specifically we will show the following theorem.
There is an algebra \(\mathbb{A}\), whose underlying logic is neither classical, nor intuitionistic such that \(\mathbf{V}^{\mathbb{A}} \vDash\) ZFC. Moreover, there are formulas in the pure
language of set theory such that \(\mathbf{V}^{\mathbb{A}} \vDash \varphi \land \neg \varphi\).
The above result is obtained by a suitable modification of the interpretation of equality and belongingness, which are classically equivalent to the standard ones used in Boolean-valued models.
Towards the end of the talk we will present an application of these constructions, showing the independence of CH from non-classical set theories, together with a general preservation theorem of
independence from the classical to the non-classical case.
15:45 - 16:30
The Katetov order on MAD families
Osvaldo Guzmán (Universidad Nacional Autónoma de México, Mexico)
The Katetov order is a powerful tool for studying ideals on countable sets. It is especially interesting when restricted to the class of ideals generated by MAD families. One of the reasons we are
interested in it is because it allows us to study the destructibility of MAD families under certain forcing extensions. In this talk, I will survey the main known results regarding the Katetov
order on MAD families and state some open problems.
16:45 - 17:30
Around (*)
David Asperó (University of East Anglia, England)
In this talk I will present work motivated by the derivation of the \(\mathbb P_{max}\) axiom \((*)\) from Martin's Maximum\(^{++}\).
17:30 - 18:15
Groups definable in partial differential fields with an automorphism
Samaria Montenegro (Universidad de Costa Rica, Costa Rica), joint work with Ronald Bustamente Medina and Zoé Chatzidakis
Model theory is a branch of mathematical logic with strong interactions with other branches of mathematics, including algebra, geometry and number theory.
In this talk we are interested in differential and difference fields from the model-theoretic point of view.
A differential field is a field with a set of commuting derivations and a difference-differential field is a differential field equipped with an automorphism which commutes with the derivations.
The model theoretic study of differential fields with one derivation, in characteristic \(0\) started with the work of Abraham Robinson and of Lenore Blum. For several commuting derivations,
Tracey McGrail showed that the theory of differential fields of characteristic zero with \(m\) commuting derivations has a model companion called \(DCF\). This theory is complete, \(\omega\)-stable and eliminates quantifiers and imaginaries.
In the case of difference-differential fields, Ronald Bustamante Medina (for the case of one derivation) and Omar León Sánchez (for the general case) showed that the theory of
difference-differential fields with \(m\) derivations admits a model companion called \(DCF_mA\). This theory is model-complete, supersimple and eliminates imaginaries.
Cassidy studied definable groups in models of \(DCF\), in particular she studied Zariski dense definable subgroups of simple algebraic groups and showed that they are isomorphic to the rational
points of an algebraic group over some definable field.
In this talk we study groups definable in models of \(DCF_mA\), and show an analogue of Phyllis Cassidy's result.
Null- and Positivstellensätze for rationally resolvable ideals
Hilbert's Nullstellensatz characterizes polynomials that vanish on the vanishing set of an ideal in C[X_]. In the free algebra C<X_> the vanishing set of a two-sided ideal I is defined in a
dimension-free way using images in finite-dimensional representations of C<X_>/I. In this article Nullstellensätze for a simple but important class of ideals in the free algebra – called tentatively
rationally resolvable here – are presented. An ideal is rationally resolvable if its defining relations can be eliminated by expressing some of the X_ variables using noncommutative rational
functions in the remaining variables. Whether such an ideal I satisfies the Nullstellensatz is intimately related to embeddability of C<X_>/I into (free) skew fields. These notions are also extended
to free algebras with involution. For instance, it is proved that a polynomial vanishes on all tuples of spherical isometries iff it is a member of the two-sided ideal I generated by 1 − ∑_j X_j^⊺ X_j. This is then applied to free real algebraic geometry: polynomials positive semidefinite on spherical isometries are sums of Hermitian squares modulo I. Similar results are obtained for nc
unitary groups.
• Division ring
• Free algebra
• Free analysis
• Nullstellensatz
• Positivstellensatz
• Rational identity
• Real algebraic geometry
• Skew field
• Spherical isometry
• nc unitary group
ASJC Scopus subject areas
• Algebra and Number Theory
• Numerical Analysis
• Geometry and Topology
• Discrete Mathematics and Combinatorics
Train Wheel Condition Monitoring via Cepstral Analysis of Axle Box Accelerations
German Aerospace Center (DLR), Institute of Transportation Systems, Rutherfordstr. 2, 12489 Berlin, Germany
German Aerospace Center (DLR), Institute of Transportation Systems, Lilienthalplatz 7, 38108 Braunschweig, Germany
Author to whom correspondence should be addressed.
Submission received: 31 December 2020 / Revised: 1 February 2021 / Accepted: 1 February 2021 / Published: 5 February 2021
Featured Application
Online wheel condition monitoring for condition based and predictive maintenance.
Continuous wheel condition monitoring is indispensable for the early detection of wheel defects. In this paper, we provide an approach based on cepstral analysis of axle-box accelerations (ABA). It
is applied to the data in the spatial domain, which is why we introduce a new data representation called navewumber domain. In this domain, the wheel circumference and hence the wear of the wheel can
be monitored. Furthermore, the amplitudes of peaks in the navewumber domain indicate the severity of possible wheel defects. We demonstrate our approach on simple synthetic data and real data
gathered with an on-board multi-sensor system. The speed information obtained from fusing global navigation satellite system (GNSS) and inertial measurement unit (IMU) data is used to transform the
data from time to space. The data acquisition was performed with a measurement train under normal operating conditions in the mainline railway network of Austria. We can show that our approach
provides robust features that can be used for on-board wheel condition monitoring. Therefore, it enables further advances in the field of condition based and predictive maintenance of railway wheels.
1. Introduction
The condition of train wheels has an impact on the passengers’ comfort, the rolling noise generation and the deterioration of railway infrastructure and, hence, the safety of railway operations.
Severe wheel defects cause high dynamical load, which can damage the railway track and shorten the life span of railway bridges [
]. There are several types of wheel defects and wear mechanisms. An overview of different wheel tread irregularities is given in [
]. Prominent examples are isolated wheel flats, polygonal wheels, corrugation, spalling, shelling and roughness. All these defects have different amplitudes and wavelengths but all increase the
dynamic wheel–rail contact forces. Additionally, permanent wheel wear leads to a constant reduction of the wheel diameter that amount to several centimeters over the wheel’s life span. In several
studies, the effects of wheel defects on the dynamic wheel–rail interaction have been investigated, e.g., Bian et al. [
] used a finite element model to analyze the impact induced by a wheel flat. Bogacz and Frischmuth [
] studied the rolling motion of a polygonalized railway wheel and in [
], a method for the computation of the wheel–rail surface degradation in a curve is explained. Casanueva et al. [
] address the issues of model complexity, accuracy and the input needed for wheel and track damage prediction using vehicle–track dynamic simulations.
The traditional maintenance strategies of wheelsets are based on the removal of the wheelset at given intervals. However, from a safety, environmental and economic point of view, the early detection
of wheel defects is important. Wayside monitoring systems are commonly used to detect faulty wheelsets in service. Mosleh et al. [
] investigated an envelope spectral analysis approach to detect wheel flats with wayside sensors using a range of 3D simulations based on a train–track interaction model. In contrast to wayside
systems, on-board monitoring systems have traditionally been focused on the detection of track defects [
] but are more and more considered for vehicle monitoring [
]. The advantage of on-board monitoring systems is that the wheel is monitored continuously and not only when the vehicle passes a track side monitoring site. This allows for the timely detection of
emerging wheel defects [
]. Furthermore, if the on-board monitoring system provides positioning, the occurrence of a wheel defect can be linked to a position on the track. The track at this position can then be inspected and
in case a track defect is identified, appropriate maintenance actions can be issued to avoid further damage to the rail and other passing vehicles.
Previous studies have also shown that train-borne accelerometers can be used to monitor the wheel. In [
] a methodology is proposed to monitor the wheel diameter by means of onboard vibration measurements. Jia and Dhanasekar [
] used wavelet approaches for monitoring rail wheel flat damage from bogie vertical acceleration signatures. Several methods based on the analysis of axle-box acceleration (ABA) have been proposed.
Bosso et al. [
] used vertical ABA to detect wheel flat defects. In [
], a data driven approach to estimate the length of a wheel flat is proposed. Bai et al. [
] presented a frequency-domain image classification method to analyze wheel flats.
In general, a trend can be noticed in sensor data analysis and condition monitoring towards analysis techniques based on data driven machine learning approaches. They can be used to find patterns,
i.e., clusters and for outlier and novelty detection in an unsupervised manner. Furthermore, supervised machine learning can be used to predict class memberships and their probabilities and to
estimate relationships between independent variables, e.g., health status indices, and features extracted from the data. These methodologies are quite powerful. They can approximate complex and
non-linear relationships and need little or no a priori information. Therefore, they are often preferred over model-based approaches. However, since machine learning techniques make use of the
underlying statistics in the data, they rely on the fact that sufficient data are available. In addition, supervised machine learning approaches need sufficient labeled data for training. A strong
focus on machine learning techniques bears the risk that powerful traditional signal processing techniques are overseen, even when they might be the right choice for a specific data analysis problem.
One example of such a powerful but uncommonly used tool is the cepstrum analysis. It dates back to the 1960s and 1970s, when it was introduced to analyze, e.g., echoes and reverberations in radar,
sonar, speech and seismic data. For a detailed overview see [
] and the references therein. The power cepstrum was first defined by Bogert et al. [
] as the power spectrum of the logarithm of the power spectrum of a signal. In contrast to the complex cepstrum it does not consider any phase information and therefore only involves the logarithm of
real, positive numbers. Bracciali and Cascini [
] used cepstrum analysis of rail acceleration to detect wheel flats of railway wheels. They identified the power cepstrum as the best instrument to reveal periodic acceleration peaks as those excited
by wheel flats. Here, we adapt this methodology to the analysis of ABA data for wheel condition monitoring. Specifically, and in analogy to the estimation of the arrival times of echoes and their
relative amplitudes, we use the power cepstrum to estimate the wheel circumference and the relative contribution of the wheel imperfections to the excited wheel vibrations.
The main contribution of this research is to introduce a simple, robust and yet precise methodology to extract wheel wear related features from the ABA signals without relying on a priori knowledge
or training data. This methodology relies on the processing of ABA data in the distance domain. That is, the ABA time series need to be transformed using speed information so as to obtain spatial
acceleration signatures. Hence, the vehicle speed must be estimated using further onboard sensors, namely a global navigation satellite system (GNSS) receiver [
] and an inertial measurement unit (IMU) [
]. The estimation relies on Kalman filter methods [
The remainder of the paper is structured as follows.
Section 2
provides the theoretical background by reviewing the calculation of the power cepstrum and introducing the term navewumber. In
Section 3
, the sensitivity of the cepstral analysis to different wheel surface defects is tested by means of simple synthetic data. In
Section 4
, experimental data are used to test the performance of the cepstrum under real-world conditions. Data pre-processing and vehicle speed estimation is also explained in this section. In
Section 5
, the results are discussed and
Section 6
provides a final conclusion.
2. Cepstral Analysis and Navewumber Domain
The power cepstrum $C_f(\tilde{x})$ can be defined as the power spectrum of the logarithm of the power spectrum of a function $f(x)$:
$C_f(\tilde{x}) = \left| \mathcal{F}^{-1}\left\{ \log\left( \left| \mathcal{F}\{ f(x) \} \right|^2 \right) \right\} \right|^2. \qquad (1)$
Here, the Fourier transform $\mathcal{F}\{\cdot\}$ is calculated using the fast Fourier transform (FFT) algorithm. Applying the forward FFT or the inverse FFT to the logarithm of the power spectrum in Equation (1) provides the same result apart from a scaling factor. The independent variable $\tilde{x}$ has the same dimension as $x$ and is originally called quefrency to indicate that it is the inverse of the frequency, which is the case for the cepstrum of a time series. Accordingly, the independent variable of the cepstrum of data in the spatial domain is dimensionally a distance. Following the nomenclature introduced by Bogert et al. [ ], we call the independent variable of the cepstrum navewumber when its dimension is a distance, since in this case it is the inverse of the wavenumber. Furthermore, we call the cepstral domain the navewumber domain. In the following the term cepstrum is always used for the power cepstrum.
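The computation in Equation (1) reduces to a few lines of NumPy. This is only a sketch, not the authors' implementation; the small `eps` floor guarding the logarithm against empty spectral bins is our assumption.

```python
import numpy as np

def power_cepstrum(signal, eps=1e-12):
    """Power cepstrum per Equation (1): squared magnitude of the inverse
    FFT of the log power spectrum. `eps` avoids log(0) for zero bins."""
    power_spectrum = np.abs(np.fft.fft(signal)) ** 2
    return np.abs(np.fft.ifft(np.log(power_spectrum + eps))) ** 2

def navewumber_axis(n_samples, dx):
    """Cepstral axis for spatially sampled data: bin n sits at navewumber
    n * dx, which is a distance (the inverse of the wavenumber)."""
    return np.arange(n_samples) * dx
```

For a signal sampled every `dx` metres, a periodic excitation with period P metres then shows up as cepstral peaks at navewumbers P, 2P, and so on.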
The cepstrum can be used to convert a convolution into the addition of the individual components, and thus a complicated deconvolution procedure can be performed by simply subtracting the undesired components in the cepstral domain. This procedure is called homomorphic deconvolution [ ]. According to Equation (1), the cepstrum of a continuous function $y(x)$ consisting of two components $s(x)$ and $r(x)$ coupled via convolution $[\ast]$ can be calculated by the following sequence of mathematical operations:
• The power spectrum $|Y(k)|^2$ of the function
$y(x) = s(x) \ast r(x), \qquad (2)$
is calculated:
$|Y(k)|^2 = |S(k)|^2 \cdot |R(k)|^2, \qquad (3)$
so that the convolution transforms to a simple multiplication in the Fourier domain.
• The logarithm of the power spectrum is taken:
$\log(|Y(k)|^2) = \log(|S(k)|^2) + \log(|R(k)|^2). \qquad (4)$
Thus, the components are coupled by addition.
• Applying the inverse Fourier transform to the logarithmic power spectrum $\log(|Y(k)|^2)$ and finally squaring the results yields the cepstrum of $y(x)$:
$C_y(\tilde{x}) = C_s(\tilde{x}) + C_r(\tilde{x}) + \text{cross-product term}. \qquad (5)$
Due to the linearity of the Fourier transform, the components in the cepstrum are still coupled by addition. The final squaring operation produces a cross-product term. However, if the cepstra of $s(x)$ and $r(x)$ occupy different navewumber ranges, this term can be omitted [ ] and Equation (5) becomes
$C_y(\tilde{x}) = C_s(\tilde{x}) + C_r(\tilde{x}). \qquad (6)$
The cross-terms can also be avoided if the final squaring operation is not included in Equation (1).
Periodic components of the logarithmic power spectrum are reduced to series of spikes (Dirac delta functions) in the cepstral domain. Ulrych [
] found out that if the spectrum of a signal is smooth it maps around the origin in the navewumber domain, while the cepstrum of a periodic impulse sequence, as excited by a wheel irregularity, is
also an impulse sequence with the same period. This means that in the cepstrum of ABA data, the component of a wheel defect can simply be separated from the component of the transmission path, i.e.,
the cepstra of the impulse responses of the track, wheel and sensor. Hence, bypassing the transmission path makes trend analyses robust towards changes of the mechanical structures of the components,
which influence the signal transmission. This property makes the cepstrum a promising tool for wheel monitoring.
3. Synthetic Data Analysis
Simple synthetic data are used to investigate the sensitivity of the cepstrum to the severity and type of different wheel surface defects and to rail roughness.
3.1. Synthetic Models
The wheel–rail contact vibrations are excited by the unevenness of the wheel and rail. Here, we employ a “moving irregularity model”, where the wheel is static and the wheel–rail surface is pulled
between the wheel and rail [
]. Since the cepstrum analysis allows to bypass the transmission path, only the excitation signal is considered in the synthetic data. The impulse response of the resulting relative displacement can
then be written as:
$y(x) = u_w(x) + u_r(x), \qquad (7)$
where $u_w$ and $u_r$ are the surface profile functions of the wheel and rail, respectively, with respect to the distance $x$ covered by the train. The wheel profile function is periodic with a period of $L = 2\pi R_w$, where $R_w$ is the wheel radius. Thus, it can be expressed by a convolution of the wheel surface profile $p_w$ with an impulse train constructed from Dirac delta functions:
$u_w(x) = p_w(x) \ast \sum_{n=0}^{N} \delta(x - nL), \qquad (8)$
where $N$ is the number of wheel rotations. The cepstrum of a periodic series of delta functions with a period $L$ is also a periodic delta function with the period $L$. Therefore, according to Equations (2)–(6), the cepstrum of $u_w(x)$ can be written as:
$C\{u_w(x)\}(\tilde{x}) = C\{p_w(x)\}(\tilde{x}) + \sum_{n=0}^{N} \delta(\tilde{x} - nL). \qquad (9)$
The particular contribution of the left and right terms of the cepstrum in Equation (9) depends on the shape of the wheel surface profile: the smoother the wheel, the higher the contribution of the left term. From Equations (8) and (9) one can see that if $p_w$ is a spike with a certain amplitude, the cepstrum reduces to a series of spikes with the same amplitude. In contrast, if $p_w$ is a sinusoid, as one would expect of a perfectly periodically polygonized wheel, the right term in Equation (9) would vanish. In general, the cepstrum provides a measure of the periodicity, namely the wheel circumference, and the repeatability of the ABA signal.
In the following we consider different models with a length of 15 m ($L = 3$ m, $N = 5$, see Figure 1). The models represent a wheel with a flat spot of different length and depth. The vertical profile of the wheel flat is modelled as:
$p_w(x) = \begin{cases} 0, & 0 \le x < L/2 - l/2, \\ -\frac{d}{2}\left(1 - \cos\frac{2\pi x}{l}\right), & L/2 - l/2 \le x \le L/2 + l/2, \\ 0, & L/2 + l/2 < x \le L, \end{cases} \qquad (10)$
where $l$ and $d$ are the wheel flat length and depth, respectively. Additionally, wheel and track roughness are modeled as a Gaussian random signal with varying standard deviation (std). The parameters of the different models can be found in Table 1.
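As a sketch (not the authors' code), the synthetic excitation described above can be generated as below. The cosine argument is taken relative to the start of the flat so the profile vanishes at its edges, the periodic extension of Equation (8) is realized by tiling rather than an explicit convolution with a Dirac comb, and the flat length and depth values used in the comments are illustrative, not the Table 1 parameters.

```python
import numpy as np

def wheel_flat_profile(L, l, d, dx=0.001, rotations=5):
    """Raised-cosine wheel flat of length l and depth d on a wheel of
    circumference L, periodically extended over several rotations."""
    n = int(round(L / dx))
    x = np.arange(n) * dx
    p = np.zeros_like(x)
    start, end = L / 2 - l / 2, L / 2 + l / 2
    mask = (x >= start) & (x <= end)
    # Shifted cosine: zero at the flat's edges, depth -d at its centre.
    p[mask] = -d / 2 * (1 - np.cos(2 * np.pi * (x[mask] - start) / l))
    return np.tile(p, rotations)

# e.g. wheel_flat_profile(3.0, 0.05, 0.002): a 5 cm long, 2 mm deep flat
# on a 3 m circumference, repeated over 5 rotations (15 m of excitation).
```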
3.2. Cepstrum Analysis of Synthetic Data
A peak at a navewumber of 3 m can be noticed in the cepstra of all models (
Figure 2
). It indicates the wheel circumference. Comparing the cepstra of the first three models shows that with increasing wheel-flat depth the amplitude of this peak increases, while the length of the
wheel flat has the opposite effect. A longer wheel flat is smoother or less spiky and the resulting harmonics in the spectrum are weaker, which leads to weaker peaks in the cepstrum. The cepstra of
models 4 and 5 indicate that an increase in wheel roughness leads to an increase of the peak amplitude. However, comparing the cepstra of models 5 and 6 suggests that decreasing the amplitude of the
rail roughness has a similar effect on the peak amplitude as increasing the amplitude of the wheel roughness. This can be explained by applying Equations (2)–(6) to Equation (7):
$C\{y(x)\}(\tilde{x}) = C\{u_r(x)\}(\tilde{x}) + \left| \mathcal{F}^{-1}\left\{ \log\left( \left| \frac{U_w(k)}{U_r(k)} \right|^2 + 1 \right) \right\} \right|^2(\tilde{x}). \qquad (11)$
The second term in Equation (11) shows that the power spectrum of the wheel profile is scaled by the power spectrum of the rail profile before the inverse Fourier transform is taken. Thus, the
amplitude of the peak in the cepstrum at the wheel circumference can be interpreted as the relation between the contributions of the wheel and track to the relative displacement and hence to the
dynamic wheel–rail interaction. If the rail roughness is zero, Equation (11) reduces to Equation (9) and the amplitudes of the wheel irregularities have no influence on the amplitude of the peaks at
the wheel circumference.
4. Experimental Data
ABA data acquired during a measurement campaign in Austria were analyzed to investigate the performance of the cepstrum algorithm at normal operating conditions.
4.1. Data Acquisition
The data have been acquired with a prototype of a multi-sensor system developed at the German Aerospace Center (DLR, Institute of Transportation Systems, Braunschweig, Germany). The system comprises
a GNSS receiver with external antenna, an inertial measurement unit and an analogue-to-digital converter with a three-axial analogue ABA sensor. The data acquisition and processing has been
implemented in Robot Operating System (ROS). ABA data were recorded at a sampling rate of 20 kHz.
The system was installed on a measurement car of the Österreichische Bundesbahnen (ÖBB, Vienna, Austria,
Figure 3
). Data were gathered throughout a two-week measurement campaign in June 2019. During this campaign the measurement train was travelling at normal operating speed of up to 200 km/h.
4.2. Speed Estimation and Data Pre-Processing
Speed information is essential in the presented ABA data analysis methodology and used to transform ABA time series into functions of a scalar along-track distance.
Speed information can be obtained from the GNSS and IMU signals. Both exhibit characteristic errors. GNSS reception is compromised in areas where the line of sight to the satellites is obstructed by, e.g., buildings or trees. Tunnels result in temporary signal outages. MEMS IMUs show slowly drifting bias errors. These shortcomings were accommodated in a sensor fusion framework with additional pre-processing steps to nevertheless obtain accurate speed information at a constant 100 Hz rate.
A simple Kalman filter (KF) [
] scheme was employed to fuse the speed information provided by the GNSS with the longitudinal IMU acceleration. Typical GNSS rates are around 1 Hz. Combination with the IMU data at around 100 Hz in
a KF yields constant 100 Hz speed rates even during GNSS outages. The signal characteristics of both GNSS and IMU data depend on the vehicle state of motion. Therefore, a parallel motion and
standstill detection scheme was implemented. For instance, vehicle standstill results in low instantaneous power of the IMU acceleration signals and can be detected accordingly. A viable option is to
run a KF with a state vector comprising the scalar along-track velocity, acceleration and acceleration bias. Depending on the state of motion different KF time and measurement updates are performed
iteratively. For instance, in standstill the speed and acceleration are known to be zero. Therefore, the bias can be observed in the noisy acceleration data. In motion one observes the sum of the
acceleration and the bias. GNSS outlier removal can be performed by only using speed measurements with enough satellites in view. In addition to the state vector, the KF provides uncertainty
information in the form of an estimation error covariance. The KF results were further improved offline in the following way: From the motion detection results the data can be divided into single
sequences of motion, which we call journeys. For each journey the vehicle does not change direction. Hence, its velocity does not change its sign. With a Rauch–Tung–Striebel (RTS) smoother [
] the KF results of the individual journeys can be re-processed using an iteration that resembles a KF run backwards in time. The RTS iterations result in smoother state estimates and are especially
beneficial in the presence of GNSS gaps.
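The fusion scheme outlined above might look roughly as follows. This toy filter uses a reduced state vector (along-track speed and accelerometer bias) with the measured longitudinal acceleration as control input; all noise parameters and rates are illustrative assumptions, not values from the campaign.

```python
import numpy as np

def fuse_speed(acc, gnss_speed, dt=0.01, gnss_every=100,
               q_v=0.5, q_b=1e-4, r=0.25):
    """Minimal Kalman filter sketch: 100 Hz IMU acceleration drives the
    speed prediction, ~1 Hz GNSS speed corrects it and makes the slowly
    drifting accelerometer bias observable."""
    x = np.array([gnss_speed[0], 0.0])    # state: [speed, accel bias]
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    Q = np.diag([q_v * dt, q_b * dt])
    H = np.array([[1.0, 0.0]])
    out = np.empty(len(acc))
    for k, a in enumerate(acc):
        # Time update: v += (a - bias) * dt; the bias is a random walk.
        x = np.array([x[0] + (a - x[1]) * dt, x[1]])
        P = F @ P @ F.T + Q
        if k % gnss_every == 0:           # GNSS measurement update
            z = gnss_speed[k // gnss_every]
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + K[:, 0] * (z - x[0])
            P = (np.eye(2) - K @ H) @ P
        out[k] = x[0]
    return out
```

A full implementation would add the motion/standstill detection and the RTS backward pass described in the text.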
The speed information is then used to separate the ABA time series into different journeys with a defined minimal speed. Furthermore, the speed is used to convert the data from time to distance
domain. Subsequently, the data are resampled to an equidistant interval of 0.001 m. Notice that at lower speeds the data might be down-sampled, so that an anti-aliasing filter needs to be applied before resampling.
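The time-to-distance conversion can be sketched with linear interpolation onto an equidistant spatial grid; the anti-aliasing filter mentioned above is omitted here, so this is only an illustration.

```python
import numpy as np

def to_distance_domain(aba, speed, dt, dx=0.001):
    """Resample an ABA time series onto a grid with dx metre spacing.
    The travelled distance is the cumulative integral of the fused
    speed; values are linearly interpolated at the grid points."""
    s = np.concatenate(([0.0], np.cumsum(speed[:-1] * dt)))
    s_grid = np.arange(0.0, s[-1], dx)
    return s_grid, np.interp(s_grid, s, aba)
```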
Figure 4
depicts the track layout and
Figure 5
shows the raw ABA together with the enhanced speed data from an 11 km journey with speeds between 10 m/s and 28 m/s.
4.3. Cepstrum Analysis of Experimental Data
The cepstrum analysis is performed for the data represented in
Figure 5
. First, the cepstrum was calculated for the whole length of the data (11,100 m). The cepstrum shows a distinct peak at a navewumber of 3 m that corresponds to the wheel circumference (
Figure 6).
In order to investigate the changes of the cepstrum along the train journey, the data was divided into segments of a certain length using a Hann window with 50% overlap. Then the cepstrum was
calculated for each window and the peak position and amplitude determined between navewumbers of 2 m and 4 m. Different window sizes were tested. The results are shown in
Table 2
. For each test the median peak position across all windows was computed. Then, the percentage of windows in which the peak occurred in a range of 0.01 m around the median peak position and the mean
amplitude of those peaks were determined. In
Table 2
, we refer to these peaks close to the median position as “correctly” depicted peaks. It can be seen that the median peak position is constant for all tests. The percentage of windows in which the
peak position was close to the median position is very similar for windows larger than 20 m. It can be assumed that smaller windows are more affected by track singularities, which might mask the
cepstral response of the wheel. The mean amplitude of the peaks of the shortest windows might be affected for the same reason.
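The segment-wise analysis described above can be sketched as follows; the window length, 50% overlap and 2–4 m search range follow the text, while details such as the logarithm floor are our assumptions.

```python
import numpy as np

def windowed_circumference(u, dx=0.001, win_m=40.0, lo=2.0, hi=4.0):
    """For each Hann-windowed segment (50% overlap), compute the power
    cepstrum and locate its peak between navewumbers lo and hi (metres).
    Returns per-window peak positions (metres) and amplitudes."""
    n = int(win_m / dx)
    hann = np.hanning(n)
    i_lo, i_hi = int(lo / dx), int(hi / dx)
    positions, amplitudes = [], []
    for start in range(0, len(u) - n + 1, n // 2):
        seg = u[start:start + n] * hann
        spec = np.abs(np.fft.fft(seg)) ** 2
        ceps = np.abs(np.fft.ifft(np.log(spec + 1e-12))) ** 2
        k = i_lo + int(np.argmax(ceps[i_lo:i_hi]))
        positions.append(k * dx)
        amplitudes.append(ceps[k])
    return np.array(positions), np.array(amplitudes)
```

The median of `positions` then estimates the wheel circumference, and outliers flag windows dominated by track singularities.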
The position of the cepstrum peaks and their amplitudes for segments of 40 m length are shown in
Figure 7
. The position of the cepstrum peak could be precisely recovered apart from a few locations. Especially at the beginning and the end of the journey, where the train speed is low, the algorithm
struggles to find the right peak location. Therefore, we assume that a minimum speed of 15 m/s is necessary to provide reliable results.
5. Discussion
5.1. Wheel Condition Monitoring with ABA Sensors
Wheel monitoring with train-borne sensors means that each wagon needs to be equipped with sensors. In contrast, wayside measurement systems are able to measure conditions of all wheelsets of each
passing train with one measurement system. However, on-board systems provide a quasi-continuous monitoring of the wagon, while wayside systems only provide data in certain time intervals, when the
train passes. Another advantage of train-borne measurement systems is that they can be used to monitor the track as well, which could justify the high number of sensors. Using broad-band
accelerometers below the suspension, in contrast to sensors installed at the bogie or car body, makes it possible to monitor the wheel diameter with very high resolution, which is beneficial for wheel-wear trend analysis. A comprehensive cost-benefit study of available wheel monitoring systems for condition-based and predictive maintenance was not part of this work but should be dealt with in future studies.
5.2. Navewumber Analysis
The navewumber analysis provides a robust tool to extract the wheel circumference from the ABA data. It can be recovered with a resolution equal to the distance between two measurement points, which
depends on the train speed and sampling frequency. At a speed of 20 m/s and sampling frequency of 20 kHz the cepstral resolution is 0.001 m. We could show that between 80 and 90 percent of the
calculated wheel circumferences were within a range of two millimeters along the whole train journey.
The high resolution and accuracy allow precise monitoring of the wheel diameter and therefore enable wheel-wear trend analysis. The approach provides reliable results under varying operational
conditions. We found that a minimal train speed of 15 m/s is sufficient to allow the estimation of the wheel circumference. Above this speed, the navewumber analysis is not affected by the train
speed but its accuracy directly depends on the accuracy of the speed estimation that is used to transform the data from the time to distance domain. Thus, short-term variations of the estimated wheel
diameter most likely relate to speed estimation errors. These Gaussian distributed errors can be estimated by the positioning algorithms introduced in
Section 4.2
It is especially noteworthy that the track condition only has a minor effect on the estimation of the wheel circumference. Even at track segments, where high wheel vibrations were excited, the wheel
circumference could be determined accurately.
Due to the conicity of the wheels, the gauge and curve radius might have an influence on the estimated wheel circumference. However, the data analyzed in this study were not affected by the track
geometry. Furthermore, the wear-related reduction of the wheel diameter is monotonic, while track-geometry changes only occur at certain track segments. Therefore, trend analysis of the wheel wear is
not influenced by the track geometry. The influence of the track conditions can be further reduced by averaging over several track segments or taking longer track segments into account.
The synthetic data analysis has shown that the absolute amplitude of the cepstral peak can be similarly influenced by the track as well as the wheel conditions. A mildly defective wheel running on a track with low roughness can produce a peak similar to that produced by a more severely defective wheel on a track with higher roughness. More generally, the amplitudes of the peaks in the navewumber domain
provide a measure of the repeatability of the ABA signal and hence a measure of the relative contribution of the wheel condition to the dynamic wheel rail interaction.
The discrimination of different wheel defects using cepstral peak position and amplitude alone seems to be impracticable. Nevertheless, the cepstral features might complement other data driven
approaches for wheel defect diagnosis.
It should be noted that the measurement campaign, which forms the data basis for this study, did not focus on wheel monitoring but rather on testing the performance of the multi-sensor system.
Therefore, no ground truth on the actual condition of the wheel exists. Nevertheless, no severe wheel defects were observed during operation. Hence, further tests including ground truth measurements
and calibration by means of direct wheel profile measurements are required to determine thresholds for wheel defect detection. In principle, the methodology provided here could similarly be used for the on-board detection of bearing defects, which should be the subject of future studies.
6. Conclusions
The early detection of wheel defects is an important asset management and maintenance task. Wheel vibrations are excited by the imperfections of the wheel and rail and hence contain information on
the health status of both assets. These vibrations can be measured by means of ABA sensors.
In this paper, a promising methodology, the cepstrum analysis, was tested for the application to wheel condition monitoring. The cepstrum is powerful in revealing periodic signals and separating them
from the rest of the signal. This makes the cepstrum particularly interesting for wheel monitoring. The vibration signal excited by the wheel profile is periodic with respect to the wheel
circumference. This periodicity changes with the rotational frequency of the wheel and hence depends on the speed of the train. This dependency can be compensated for by transforming the ABA data
from the time to the spatial domain. To accomplish that, accurate speed information is necessary, which was obtained by fusing IMU and (E)GNSS data. The cepstral analysis was then performed in the
spatial domain. The obtained cepstrum itself is then in a spatial domain, which we called navewumber domain to highlight the inverse relation to the wavenumber. The periodicity of the ABA signal
excited by the wheel is then represented by a peak in the cepstrum at a navewumber equal to the wheel circumference. The position of the peak precisely indicates the wheel circumference and can
therefore be used to monitor the wheel diameter. The amplitude of this peak provides a measure of the relative contribution of the wheel to the combined wheel–rail roughness. The cepstral wheel
monitoring approach presented here requires neither extensive hyperparameter testing nor training data. Ground truth data might be used to define hard thresholds for the detection of certain track
defects. However, we think that it is more reasonable and also sufficient to monitor changes in the cepstral features and thus detect deviations from the normal behavior that can be related to wheel defects.
Through the analysis of experimental data, we could show that the cepstrum analysis is robust under varying operating conditions.
From these findings we conclude that cepstral analysis of ABA data is a powerful methodology to monitor the wear-related reduction of the wheel circumference and to detect and monitor the evolution
of wheel surface defects.
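The processing chain summarized above (spatial-domain signal, log-magnitude spectrum, inverse transform, windowed peak search) can be illustrated with a short, self-contained sketch. This is not the authors' code; the signal model, amplitudes, and search window below are illustrative assumptions.

```python
import numpy as np

def real_cepstrum(x):
    """Real cepstrum: inverse FFT of the log-magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(x))
    return np.fft.irfft(np.log(spectrum + 1e-12))

# Synthetic "ABA-like" signal resampled to the spatial domain: an impulse
# repeating every `period` samples (standing in for the wheel-circumference
# periodicity) plus broadband roughness noise.
rng = np.random.default_rng(0)
n, period = 4096, 256
signal = rng.normal(0.0, 0.2, n)
signal[::period] += 5.0

ceps = real_cepstrum(signal)
# Windowed peak search, analogous to the paper's 2-4 m navewumber window.
lo, hi = 200, 400
peak = lo + int(np.argmax(ceps[lo:hi]))
print(peak)  # estimated repetition length in samples
```

The peak position recovers the repetition length of the impulse train; in the paper's setting, the corresponding navewumber directly reads off the wheel circumference, and its drift over time tracks wear.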
Author Contributions
Conceptualization, B.B.; methodology, B.B.; software, B.B., J.H., T.N. and M.R.; validation, B.B., J.H. and T.N.; formal analysis, T.N.; investigation, B.B.; data curation, J.H.; writing—original
draft preparation, B.B.; writing—review and editing, J.H. and M.R.; visualization, B.B.; supervision, B.B.; project administration, M.R. All authors have read and agreed to the published version of
the manuscript.
Funding
This project has received funding from the European GNSS Agency under the European Union’s Horizon 2020 research and innovation program under grant agreement No. 776402.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Restrictions apply to the availability of these data. The data were obtained in collaboration with the Österreichische Bundesbahnen (ÖBB, Vienna, Austria) and are available on request from the corresponding author only with permission of the ÖBB.
Acknowledgments
We would like to thank the ÖBB for providing the measurement train and for guidance and support during the measurement campaign, and the two anonymous reviewers for their valuable comments and advice, which helped to improve the manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 1. Different synthetic data models according to Table 1.
Figure 3. (a) Measurement train; (b) Multi-sensor box inside the wagon; (c) Accelerometer mounted at the axle box; (d) Global navigation satellite system (GNSS) antenna (yellow box).
Figure 5. Axle-box accelerations (ABA) data (solid blue line) and speed data (dashed red line) in the time domain.
Figure 7. Cepstrum analysis of 40-m-long track segments; (a) position of cepstrum peaks at navewumbers between two and four meters and (b) corresponding peak amplitude.
Model Number | Wheel Flat Length in mm | Wheel Flat Depth in mm | Wheel Roughness Std in mm | Track Roughness Std in mm
1 | 50 | 0.3 | - | 0.01
2 | 50 | 0.6 | - | 0.01
3 | 100 | 0.3 | - | 0.01
4 | 50 | 0.3 | 0.01 | 0.01
5 | 50 | 0.3 | 0.05 | 0.01
6 | 50 | 0.3 | 0.01 | 0.002
Window Length in m | Number of Windows | Median Peak Position in m | Percentage of “Correctly” Depicted Peaks | Mean Amplitude of “Correctly” Depicted Peaks
10 | 2222 | 2.99 | 56.57 | 0.28
20 | 1112 | 2.99 | 79.77 | 0.52
40 | 557 | 2.99 | 86.71 | 0.53
80 | 279 | 2.99 | 86.02 | 0.52
160 | 140 | 2.99 | 89.29 | 0.51
320 | 71 | 2.99 | 84.51 | 0.56
640 | 36 | 2.99 | 88.89 | 0.52
1280 | 19 | 2.99 | 84.21 | 0.54
Baasch, B.; Heusel, J.; Roth, M.; Neumann, T. Train Wheel Condition Monitoring via Cepstral Analysis of Axle Box Accelerations. Appl. Sci. 2021, 11, 1432. https://doi.org/10.3390/app11041432
Components of Functions in R Language
Functions in the R language are objects with three basic components: the formal arguments, the body, and the environment. Let us discuss each of these components in detail.
Formal Argument in R
To learn about the formal arguments of a function in R, see the post on formal arguments; also see the basics of functions in the R language.
Body of a Function
The body of a function consists of parsed R statements. It is usually a collection of statements in braces, but it can be a single statement, a symbol, or even a constant.
Environment of a Function
The environment of a function is the environment that was active at the time that the function was created. The environment of a function is a structural component of the function and belongs to the
function itself.
A fourth component of a function in R can also be considered: the return value.
Return Value of a Function
The value of the last expression evaluated within a function is returned by the function and is therefore available for assignment. Functions can return only a single value, but in practice this is not a limitation, as a list containing any number of objects can be returned. Objects can be returned visibly or invisibly. This option does not affect the assignment side, but it affects the way results are displayed when the function is called.
y <- function(n){
  out <- runif(n)
  cat(head(out))
  invisible(out)  # returned invisibly: assignable, but not auto-printed
}
Function Closures in R
A function closure or closure is a function together with a referencing environment. Almost all functions in R are closures as they remember the environment where they were created. The functions
that cannot be classified as closures, and therefore do not have a referencing environment, are known as primitives.
In R, these primitive functions call the underlying C code directly; sum() and c() are good cases in point.
When we call a function, a new environment is created to hold the function’s execution, and normally that environment is destroyed when the function exits. But if we define a function g() that returns a function f(), the environment where f() is created is the execution environment of g(); that is, the execution environment of g() is the referencing environment of f(). As a consequence, the execution environment of g() is not destroyed when g() exits but persists as long as f() exists. Finally, as f() remembers all objects bound to its referencing environment, f() remembers all objects bound to the execution environment of g().
We can use the referencing environment of f() to hold objects, and these will be available to f().
g <- function(){
  y <- 1
  function(x) x + y  # the returned closure keeps a reference to y
}
f1 <- g()
f1(2)  # y is still available through the referencing environment: returns 3
Closures can, in turn, be used to generate new functions. This allows us to have a double layer of development: a first layer that does all the complex work common to all the functions, and a second layer that defines the details of each function.
f <- function(i){
  function(x) x + i  # illustrative inner body; i is fixed by the outer call
}
f1 <- f(1)  # f1(10) returns 11
f2 <- f(2)  # f2(10) returns 12
By understanding these components, one can effectively create and use functions to enhance one’s R programming.
Note that in R language:
• Functions can be nested within other functions.
• Functions can access variables from the environment where they are defined (lexical scoping).
• R provides many built-in functions for various tasks.
• One can create customized functions to automate repetitive tasks and improve code readability.
The ins and outs of safety integrity level reliability calculations
One of the key elements required in the design of safety instrumented systems (SIS) is the engineering calculation of a reliability figure of merit called the safety integrity level (SIL) to
determine if the designed SIS meets the target SIL as specified in the system’s safety requirement specification (SRS).
IEC 61511-1, Clause 11.9.1 requires that the calculated failure measure of each safety instrumented function (SIF) shall be equal to, or better than, the target failure measure specified in the SRS.
Safety reliability calculations for low-demand systems relate SIL to the calculation of the average probability of failure on demand (PFDavg) or a required risk reduction factor (RRF). For SISs where the demand rate is more than one per year, or for high- or continuous-demand systems, the average frequency of dangerous failure per hour (PFH) is used.
SIL calculations have received considerable coverage in the efforts of the S84 committee of the International Society for Automation. The group’s ISA TR84.00.02-2015 technical report covers current
industry practices, with a new version due to be issued in the near future. As an industry, we've been doing these types of calculations since the beginning of the SIS standards. And, while not
rocket science, they can be a complicated subject. It's beyond the scope of this article to cover this subject in depth. However, we will cover some of the basics and some advanced topics where work
is being done in this area.
All SIL calculations are based on statistical models. What we need to ensure is that the models are representative and not so complex that their usefulness is diminished. They can give us a
reliability figure of merit to work with and help us better understand the system from a reliability perspective. These calculations are a verification tool to help design a SIS that has a high
expectation of providing the required safety integrity level reliability as specified in the SRS, and meet verification requirements required by the IEC 61511-1 standard.
From a practical perspective, the calculations can provide an analysis of the SIS conceptual design, and help determine options to meet the SIL and at what proof-test interval. These calculations,
unfortunately, can be subject to abuse if calculation parameters are cherry picked. To combat this potential abuse, architectural constraints (hardware fault tolerance requirements) are in the
standard to help ensure the proper level of redundancy in the SIS design.
Primary methodologies
Most people use a commercial program to do SIL calculations, but one should always understand the underlying methodologies, their benefits and limitations. There are three common and generally
acceptable calculation methods: simplified equations, fault-tree analysis (FTA) and Markov modeling. The reliability block diagram is sometimes used, but to a lesser extent, as are some other
methodologies such as Petri nets.
The simplified equations are the simplest to understand and the easiest to implement in a spreadsheet, while the Markov model is the most complex. All will get the job done. The FTA and Markov model
methods are graphical in nature, but the graphical Markov model can be quite complex. This makes the FTA a better approach if a visual representation is desired, particularly for analysis of complex
systems. Further, FTA can typically provide a more detailed system design analysis. The main discussion in this article is based on simplified equations for low-demand systems in order to cover the
main discussion points without digging into the complexity of all the varied methodologies. A more detailed discussion on various calculation methodologies can be found in references 1, 2 and 3.
Failure rates
We estimate the failure rates of instruments and devices based on the number of failures in a group of hopefully representative instruments over time. We then use these failure rates in our
calculations for designing the SIS for our applications. We assume that our instruments will have the same or better failure performance than the group of instruments that the failure rate is based on.
Unfortunately, there are a number of failure-rate database sources, as well as many approval reports to choose from, which can give substantially different failure rate numbers (as discussed in
reference 4). The selected failure rates for the calculations should also reflect reasonable failure rates expected for the type of service the instruments will be installed in. Unfortunately, the
service used to develop database failure rates is often not well known, so for severe or difficult applications, failure rate numbers may not reflect actual performance.
Failure rates can be broken down into different types depending on what parameters are used in the calculation [1]. For simplicity of discussion, the failure rates in this article are divided into
two types, dangerous and safe. The accuracy of the failure rates depends on the component design and construction, its application, its service, and the environment in which it's installed and
maintained. The failure rate can have a significant uncertainty associated with it, which needs to be accounted for in the calculations, per IEC 61511-1. The key is the selection of a failure rate
that's representative of the device’s inherent reliability and service where it will operate.
Many of the example PFDavg equations for redundant configurations are based on identical instruments, but they can be easily extended to different instruments [9].
Generally speaking, all the math currently used is based on the concept of random failure as the primary failure mode, which follows the exponential failure rate distribution model, primarily due to
the failure rate being considered constant during a device’s useful life for many of the devices in SIS service. The math lends itself well to the concept of periodic proof tests to calculate the
PFDavg over the lifecycle of the SIS. In reality, the exponential failure rate distribution only applies well to electronic devices, and is less applicable to devices that are mechanical or have a
significant portion that's mechanical, which can be significant for long test intervals. In those cases, the failure rate could be modeled as a random contribution plus a time-dependent contribution
shown in Equation 1:

λ(t) = λ[random] + λ[time-dependent](t)    (Eq. 1)
For an exponential failure distribution, it can be shown that the reliability for a one-out-of-one (1oo1) arrangement without consideration of common cause, diagnostic coverage, mean time to repair
(MTTR) or other factors is:

R(t) = e^(-λ[d]·t)

And the failure probability is:

PFD(t) = 1 - R(t) = 1 - e^(-λ[d]·t)
To determine the average PFD, the equation below for the average of a function can be used, with the proof-test interval (TI) as the time period that the probability is averaged over:

PFDavg = (1/TI) ∫[0,TI] PFD(t) dt

This was painful enough; think about what the calculus for averaging with exponentials involves for redundant arrangements. There needs to be an easier way, and there is. We can use the Maclaurin series to estimate the solution (1 - e^(-λt)) in a simpler form:

1 - e^(-λt) = λt - (λt)²/2! + (λt)³/3! - …
When the terms that don't significantly contribute to the PFD are removed (a good approximation is λTI < 0.1), this gives us the PFD:

PFD(t) ≈ λ[d]·t

but we still need to get to the PFDavg. Why the average probability? This is because we're trying to calculate the probability of failure on a demand, but we don't know when the device will fail, nor when a safety demand will occur. For our calculation, we assume a demand can occur at any time during the defined test interval, making the average a better predictor of the probability of a failure during the test interval.
We can find the PFDavg by integrating to find the average of the 1oo1 configuration with respect to time:

PFDavg(1oo1) = (1/TI) ∫[0,TI] λ[d]·t dt = λ[d]·TI/2
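As a numerical sanity check on this result, the simplified λ[d]·TI/2 value can be compared against the exact average of 1 - e^(-λ[d]·t). The failure rate and test interval below are arbitrary illustrative values, not recommendations:

```python
import numpy as np

lam_d = 1.0e-6   # dangerous failure rate, per hour (illustrative)
TI = 8760.0      # proof-test interval, hours (one year)

# "Exact" PFDavg: numerical average of PFD(t) = 1 - exp(-lam_d * t)
# over one proof-test interval.
t = np.linspace(0.0, TI, 200_001)
pfd_exact = float(np.mean(1.0 - np.exp(-lam_d * t)))

# Simplified-equation result from the truncated series.
pfd_simple = lam_d * TI / 2.0

print(pfd_simple)              # simplified result
print(pfd_exact / pfd_simple)  # close to 1, since lam_d*TI << 0.1
```

For λ[d]·TI near 0.01, the two agree within a fraction of a percent, which is why the simplified equations are adequate well below the λTI = 0.1 guideline.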
Now, when we look at the PFDavg of a redundant arrangement, we have two options: the “average before” option, where the average probabilities of failure (PFDavg) for each redundant element are calculated before they're multiplied together, or the “average after” option, where the probabilities are averaged after the PFDs are multiplied together. Both methods are acceptable, but each gives a
different answer. The “average after” method provides the more conservative answers and is commonly used in industry. This same question will come up when you're using FTA that uses the “average
before” method, when you have redundancies that use an AND gate or a voting gate.
For example, for a 1oo2 arrangement with the same type of instrument, if we average before we multiply the individual PFDs, we get:

PFDavg(1oo2) = (λ[d]·TI/2)·(λ[d]·TI/2) = λ[d]²·TI²/4
While for a 1oo2 arrangement of instruments, if we average after we multiply the individual PFDs, we get:

PFDavg(1oo2) = (1/TI) ∫[0,TI] (λ[d]·t)² dt = λ[d]²·TI²/3
If we extend these equations to a general “average after” equation for MooN redundancies, we get:

PFDavg(MooN) = [N! / ((N - R)!·R!)] · (λ[d]·TI)^R / (R + 1)

Where: R = N - M + 1.
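The general “average after” expression reduces to the familiar special cases, which is easy to check with a small helper. This sketch assumes identical devices and ignores common cause, diagnostics, and partial testing:

```python
from math import comb

def pfd_avg_moon(lam_d, ti, m, n):
    """Simplified 'average after' PFDavg for a MooN voting group of
    identical devices (no common cause, no diagnostics, no repair)."""
    r = n - m + 1  # number of dangerous failures that defeats the group
    return comb(n, r) * (lam_d * ti) ** r / (r + 1)

lam_d, ti = 1.0e-6, 8760.0
print(pfd_avg_moon(lam_d, ti, 1, 1))  # 1oo1: lam_d*TI/2
print(pfd_avg_moon(lam_d, ti, 1, 2))  # 1oo2: (lam_d*TI)^2/3
print(pfd_avg_moon(lam_d, ti, 2, 3))  # 2oo3: (lam_d*TI)^2
```

Note how each added tolerated failure multiplies the PFDavg by roughly another factor of λ[d]·TI, which is why redundancy is so effective for small λ[d]·TI — at least until common cause is considered.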
The PFDavg calculation for the system must include all the components that could have a dangerous failure and defeat the SIS. These basic, simplified equations, however, don't tell the whole story,
particularly when we're talking about redundant configurations.
Common cause
For redundant configurations, the potential for a common-cause failure (CCF) can significantly contribute to the PFDavg. CCF is a term used to describe random and systematic events that can cause
multiple devices to fail simultaneously. The likelihood of multiple failures becomes more likely as the proof-test interval gets longer. In simplified equations, the probability of a CCF is commonly
modeled as a percentage of the devices’ base failure rate (the term β is commonly used for this percentage), and the CCF probability contribution is added to the redundant PFDavg. For a MooN
low-demand configuration, the general equation for redundant components with the common-cause contribution is:
PFDavg(MooN) = [N! / ((N - R)!·R!)] · ((1 - β)·λ[d]·TI)^R / (R + 1) + β·λ[d]·TI/2    (Eq. 13)

Where: R = N - M + 1 and β is the common-cause fraction of the dangerous failure rate.
From this equation, we can see that common cause can easily dominate the equation if β is large enough. If the CCF is significant, it may be better to improve the design with diversity, rather than
compensate by calculation. In addition, the determination of β is an estimate that can have a significant uncertainty associated with it.
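A quick numeric comparison shows how easily the β term can dominate a 1oo2 group; β = 5% and the other values are assumptions chosen only for illustration:

```python
# 1oo2 group: independent term vs. beta-model common-cause term.
lam_d, ti, beta = 1.0e-6, 8760.0, 0.05

indep = ((1.0 - beta) * lam_d * ti) ** 2 / 3.0  # independent 1oo2 term
ccf = beta * lam_d * ti / 2.0                   # common-cause term

print(indep, ccf, ccf / indep)
```

With these numbers, the common-cause term is roughly ten times the independent term, so adding more identical hardware would barely improve the PFDavg; diversity attacks the dominant term instead.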
Partial testing
There are several general types of partial testing that affect SIL calculations. The first is where a defined portion of a proof test is performed online with the balance of the test performed
offline. The purpose of partial testing can be to meet a PFDavg or allow lengthening the offline proof test interval. The primary example of this is partial-stroke testing of shutdown valves. For a
1oo1 valve configuration, the PFDavg equation is:

PFDavg = TC·λ[d]·PTI/2 + (1 - TC)·λ[d]·TI/2    (Eq. 14)

Where: TC = test coverage, TI = manual proof-test interval, PTI = partial test interval, and λ[d] = dangerous failure rate.
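The 1oo1 partial-test equation can be sketched as a small function; the 70% test coverage, monthly partial-stroke interval, and three-year full-test interval are assumed illustrative values:

```python
def pfd_avg_partial(lam_d, tc, pti, ti):
    """1oo1 partial-test sketch: the covered fraction TC is tested every
    PTI; the remainder only at the full proof test every TI."""
    return tc * lam_d * pti / 2.0 + (1.0 - tc) * lam_d * ti / 2.0

lam_d = 2.0e-6        # illustrative dangerous failure rate, per hour
ti = 3 * 8760.0       # three-year full proof-test interval
no_pst = pfd_avg_partial(lam_d, 0.0, 0.0, ti)
monthly_pst = pfd_avg_partial(lam_d, 0.7, 730.0, ti)  # ~monthly PST
print(no_pst, monthly_pst)
```

Here the partial-stroke test cuts the PFDavg roughly threefold, but the 30% of failures it cannot reveal still dominates the result, which limits how far shortening the PTI can help.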
Another type of partial testing results from imperfect proof testing, which is where the SIS proof test doesn't fully test for all the dangerous failure modes. IEC 61511-1 requires consideration of
proof-test coverage in the calculations. The percentage of dangerous failures detected by the proof test is called the proof-test coverage (PTC). The effect of this on the PFDavg can be calculated as follows:

PFDavg = PTC·λ[d]·TI/2 + (1 - PTC)·λ[d]·MT/2

Where: PTC = proof-test coverage, TI = manual proof-test interval, MT = mission time (the point at which the SIS is restored to original, full functionality through overhaul or replacement), and λ[d] = dangerous failure rate.
Note that using this calculation is essentially adding a random failure probability to a failure probability caused by a systematic failure, which is sort of like adding apples and oranges. When the
result is within an acceptable SIL band, this can result in the plant living with a percentage of undetected dangerous failures that aren't tested for over the mission time.
Would you want to do this? If the calculation results in moving outside of the acceptable SIL band, what do you do? One poor design choice is to test the things you're already testing more often to
make the PFDavg fit the desirable SIL band (e.g., using the random part of the equation to fix a systematic problem). This seems contrary to designing a reliable SIS system. Rather, the best approach
is to improve your PTC or at least reduce mission time.
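The trade-off can be made concrete with the imperfect-proof-test equation; the rates, intervals, and coverages below are illustrative assumptions only:

```python
def pfd_avg_ptc(lam_d, ptc, ti, mt):
    """Imperfect proof-test sketch: the uncovered fraction (1 - PTC)
    accumulates over the mission time MT instead of the test interval."""
    return ptc * lam_d * ti / 2.0 + (1.0 - ptc) * lam_d * mt / 2.0

lam_d, mt = 1.0e-6, 20 * 8760.0   # illustrative rate, 20-year mission

base = pfd_avg_ptc(lam_d, 0.90, 8760.0, mt)
test_more_often = pfd_avg_ptc(lam_d, 0.90, 4380.0, mt)   # halve TI
improve_coverage = pfd_avg_ptc(lam_d, 0.99, 8760.0, mt)  # raise PTC
print(base, test_more_often, improve_coverage)
```

Halving the test interval improves the result only modestly, because the uncovered failures accumulate over the mission time either way, while raising the coverage from 90% to 99% cuts the PFDavg by more than half: numerically the same conclusion — fix the coverage, not the frequency.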
Diagnostics is another form of partial testing where the failure rates are divided into failures detected or not detected by the device’s internal diagnostics or external diagnostics. The effect of
this on the PFDavg can be shown using equation 14, where the diagnostic coverage (DC) is substituted for the TC and the diagnostic test interval for the PTI. This equation can be used to cover
external diagnostics in FTA. Where credit is given to the diagnostics to improve the PFDavg, care should be taken that dangerous detected failures are actually converted to “safe” failures by the design of the system. The TC, DC and PTC are estimated values, and there's an uncertainty associated with their determination.
Site safety index
Site safety index (SSI) is a quantitative model that allows the impact of potential “systematic failures” like poor testing to be included in SIL verification calculations. SSI has five levels from
“SSI-0 = none” to “SSI-4 = perfect,” and the use of SSI in the PFDavg calculation results in a multiplicative factor where anything less than SSI-4 will cause the PFDavg calculation to have a higher
PFDavg than the basic calculation. (5)
Again, using this kind of model to compensate for poor systematic performance, rather than improving that performance, seems contrary to designing a reliable SIS and the work processes that support it.
Proof test intervals
The off-line proof test interval is a key parameter and is typically selected based on a turnaround frequency, while the on-line test frequency, if any, is typically determined by the PFDavg
requirements. In practice, turnarounds typically occur in the year they're projected, but can vary within that year due to operating and economic conditions. If your plant turnaround schedule has
varied historically, an analysis of the effect on the PFDavg should be done.
Uncertainty
Stronger consideration of the uncertainty associated with SIL calculations is a major addition for the next edition of the ISA TR84.00.02 technical report. We must recognize there's uncertainty
associated with the reliability parameters. 61511-1 states, “The reliability data uncertainties shall be assessed and taken into account when calculating the failure measure.” The technical report
provides a number of ways to compensate for uncertainty, primarily for the failure rate, which are mentioned below, but a detailed discussion is beyond the scope of this article.
Two methods that have been used to compensate for uncertainty are setting the target PFDavg to a more conservative value (e.g., applying a safety factor) and variance contribution analysis. These methods, plus the use of chi-squared distributions, are discussed in References 1, 6 and 7.
One could also use a Bayesian approach with field data or other data sources to improve the failure rate parameters, which is discussed in Reference 8. Also, see Stephen Thomas’ discussion of this
and other SIS topics, which can be found at his www.SISEngineer.com website.
Relevance of SIL calculations
SIL calculations are an engineering tool that can assist us in the design of an SIS and are required by IEC 61511-1. But they're not the end-all in designing a reliable SIS. There's an ongoing trend to
consider additional factors in the calculation—leading to more complex calculations—with the assumption that if we factor in more things, we'll get a more accurate result, leading to a better design.
However, this may not always be the case. Consideration of these additional factors typically results in a higher PFDavg, which when the PFDavg crosses the SIL lines, leads to changes in the design
or the test frequencies, but not necessarily a more reliable system. Consideration of the additional parameters can also be abused to compensate for a bad design by allowing a poor design to remain
if the calculation doesn't drop you down a SIL band.
Many of these factors also involve mixing random failure with systematic failure in the calculation, which can lead to fixing the systematic portion of the calculation by modifying the random
portion, which doesn't fix the problem. An example is “fixing” poor proof-test coverage by testing the system more often, which still leaves the dangerous failures that aren't tested for hanging
out there, latent, to come back and haunt you. The solution is to improve the test coverage, not compensate for it.
Uncertainty is another area where having poor data is compensated for by increasing the failure rate. This can result in added redundancy or more testing, but doesn't fix the inherent problem of
having bad data to begin with.
SIL calculations should never be used to compensate for a bad design. Consideration of these additional factors shouldn't be used to excuse a poor design, but rather to identify areas where the
design needs improvement.
While there is a need to achieve a quantitative metric for a SIL rating based on the IEC 61511-1, one should use common sense and good engineering practice in these calculations. Remember, a reliable
system will always result from an improved design, rather than accepting a less reliable system that the calculation says is acceptable. Calculations should be used as a tool that helps lead to a
reliable design, and not the sole “proof” that you've achieved one.
1. ISA TR84.00.02, “Safety integrity level (SIL) verification of safety instrumented functions,” 2015, current draft.
2. Safety Instrumented Systems: A Lifecycle Approach, Paul Gruhn and Simon Lucchini, ISA, 2019.
3. Safety Instrumented System Verification, William M. Goble and Harry Cheddie, ISA, 2005.
4. “Will the real reliability stand up?,” William L. Mostia, Jr., PE, Texas A&M Instrumentation Symposium for the Process Industries, 2017.
5. “Assessing safety culture via site safety index,” Julia V. Bukowski and Denise Chastain-Knight, 12th Global Congress on Process Safety, 2016.
6. “Evaluation of uncertainty in safety integrity level calculations,” Raymond Freeman and Angela Summers, Process Safety Progress, Wiley, 2016.
7. “General method for uncertainty evaluation of SIL calculations – part 2, analytical methods,” Raymond “Randy” Freeman, American Institute of Chemical Engineers, 2017.
8. “A hierarchical Bayesian approach to IEC 61511 prior use,” Stephen L. Thomas, P.E., Spring Meeting and 14th Global Congress on Process Safety, 2018.
9. “Easily assess complex safety loops,” Dr. Lawrence Beckman, CEP, March 2001.
About the author
Frequent Control contributor William (Bill) Mostia, Jr., P.E., is principal, WLM Engineering, and can be reached at [email protected]
Re: equations
• To: mathgroup at smc.vnet.net
• Subject: [mg32044] Re: equations
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Thu, 20 Dec 2001 03:42:07 -0500 (EST)
• References: <BD4BAAFD-F42E-11D5-A0A7-00039311C1CC@tuins.ac.jp>
• Sender: owner-wri-mathgroup at wolfram.com
Andrzej Kozlowski wrote:
> I am sure you must be right. I guessed I was confused because I have
> never considered carefully how root isolation and their numbering as
> Root[f,1], ....Root[f,n] work. Of course we know that :
> In[2]:=
> Solve[a + b*x^2 + c*x^3 + d*x^4 + e*x^5 == 0, x]
> Out[2]=
> {{x -> Root[a + b*#1^2 + c*#1^3 + d*#1^4 + e*#1^5 & , 1]},
> {x -> Root[a + b*#1^2 + c*#1^3 + d*#1^4 + e*#1^5 & , 2]},
> {x -> Root[a + b*#1^2 + c*#1^3 + d*#1^4 + e*#1^5 & , 3]},
> {x -> Root[a + b*#1^2 + c*#1^3 + d*#1^4 + e*#1^5 & , 4]},
> {x -> Root[a + b*#1^2 + c*#1^3 + d*#1^4 + e*#1^5 & , 5]}}
> means no more than that there are 5 roots of a fifth degree equation. At
> this point the ordering of the roots is purely formal. It's only when
> you substitute values for the parameters than Mathematica isolates the
> roots and the ordering acquires a meaning (so one can think of these
> "solutions" as (topologically badly behaved) functions of the parameters
> that return roots). I was not sure, however, that this would remain
> true when you have a complex system of equations and solutions involve
> root objects with coefficients that themselves involve root objects and
> so on. It does not seem quite obvious to me that a root of a system of
> equations can be consistently represented as something like Root[f, 5]
> where f again involves other root objects, in such a way that this
> returns a root of the system for all values of the parameters (it's the
> choice of numbering that worried me). Presumably there is some theorem
> here, perhaps a rather trivial one. On the other hand, it now seems that
> nothing very sophisticated is going on in such a case, a suitable
> Groebner basis is found which eliminates variables in turn and then the
> roots are simply numbered purely formally, just as in the above
> example. Still, I am puzzled by Fred's claim that the solutions
> obtained by means of Eliminate in the example that started this
> discussion do not work for some values of the parameters (I may be still
> misunderstanding him). Since I think they must be solutions (by
> elimination theory) I assumed that was something to do with the above
> discussion. Of course it may be due to something quite different, for
> example problems with precision in numerical computations.
> Andrzej Kozlowski
> Toyama International University
> JAPAN
> http://platon.c.u-tokyo.ac.jp/andrzej/
Let me make two clarifications (well, I hope they are seen as such).
(1) Whatever I said applies under the assumption that we begin with
polynomials in the variables for which we solve (as opposed to having
them appear, for example, inside radicals). Symbolic parameters are on a
different footing. They may live inside radicals, Root[] objects,
nestings of the above, and so forth. As you say, the solutions expressed
in terms of Root[] functions that themselves involve such things are
just formal objects, and indeed these formal parametrized solutions are
not (in general) topologically well behaved functions of the underlying
parameters. Once parameters are assigned values e.g. via ReplaceAll, you
get algebraic numbers that are well behaved entities.
(2) I believe Fred Simons referred to the following situation. You are
given a system in n variables that has finitely many solutions. You
eliminate all but one to get a univariate polynomial in the remaining
variable, solve it, and so obtain solutions in that variable. Now do the
same thing for all other variables. You then have a problem: how to
patch solutions in the various variables together to get solutions in
all variables at once.
To get some idea of how hopeless this is I will point out two issues.
(i) Take the "generic" setting (radical ideal, general position with
respect to all variables, I'm not going to define what all that means;
those who do not know but anyway read this far should just take my word
that it is a common scenario). In this case any lexicographic Groebner
basis will look like
polynomial in x[n] of degree d
x[n-1] - poly in x[n] of degree < d
...
x[1] - poly in x[n] of degree < d.
In particular we have d solutions. If you eliminated all variables but
x[n] then you get the d roots in that variable. Now do the same for each
other variable. You obtain exactly d roots for each. So we have d^n
possible ways to combine roots from various solutions, but only d give
actual solutions to the system. What a headache. In fact I believe it
was this issue that led to the so-called FGLM conversion method for
converting from a differently ordered Groebner basis to lexicographic
(knowing from earlier work by Buchberger how to find the univariate
polynomials helped a lot in coming up with the complete conversion).
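The d^n-versus-d count in (i) can be seen concretely in a toy system (a sketch in Python rather than Mathematica, with an invented system x + y = 3, x*y = 2):

```python
import itertools

# Toy system x + y = 3, x*y = 2 with the two solutions (1,2) and (2,1).
# Eliminating either variable gives t^2 - 3t + 2 = 0, so each variable
# taken alone has the root set {1, 2}.
x_roots = [1.0, 2.0]
y_roots = [1.0, 2.0]

# There are d^n = 4 ways to pair univariate roots, but only d = 2 of
# those pairs are actual solutions of the full system.
solutions = [(x, y)
             for x, y in itertools.product(x_roots, y_roots)
             if abs(x + y - 3) < 1e-9 and abs(x * y - 2) < 1e-9]
print(solutions)  # [(1.0, 2.0), (2.0, 1.0)]
```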
(ii) This next issue is, I think, what has led to some of your
questions. The possibility of combining solutions in individual
variables, discussed in (i) above, is utterly hopeless if they happen to
be these formalized Root[] functions in parameters. The reason, as you
suspect, is that we do indeed have a "branch jumping" problem (or so I
am fairly certain). In other words, some parameter values would cause us
to patch together certain univariate solutions, but these would need to
change for other parameter values.
This problem does not arise if we solve for all variables at once using
either lexicographic Groebner basis with back substitution, or the
eigensystem approach I referred to in a previous post; those methods will
give a needed consistency by forcing individual solutions to be in terms
of a specific parametrized Root[] function. That is why I can claim that
it all (magically) works out. I certainly agree, however, that one
cannot hope to patch together parametrized solutions in individual variables.
Daniel Lichtblau
Wolfram Research
Exponential VWAP Indicator for NinjaTrader - Free Indicators
Exponential VWAP Indicator
Two of the most common indicators I see used by traders are the EMA (Exponential Moving Average) and VWAP (Volume Weighted Average Price). Typically, traders prefer the EMA to a standard moving
average because it will react faster to price movements, causing less lag than a standard moving average. Because the VWAP indicator uses a standard average calculation, we can remove some of the lag
in this indicator by simply making it use an exponential formula. Our new Exponential VWAP indicator does just that, removing the lag that is present in standard VWAP indicators!
VWAP Indicator Comparison
The chart above shows three moving averages, all of which are set to use a period of 30 bars for their calculations. The EMA appears to have the most lag, and is the only indicator that does not take
volume into account. The VWAP (or VWMA) looks to have less lag than the EMA, and the Exponential VWAP has the least lag of all three indicators. Are you intrigued?
Download Now:
Click the link below to download our Free Exponential VWAP indicator for NinjaTrader. The download contains unlocked code, so you are able to easily modify it as needed. If you need assistance
modifying the code, please contact our very experienced NinjaTrader programmer.
NinjaTrader 7:
Exponential VWAP Indicator for NinjaTrader 7 (6528 downloads)
NinjaTrader 8:
Exponential VWAP Indicator for NinjaTrader 8 (7178 downloads)
7 comments
1. THANKS WILL TRY IT
2. Thanks.
3. will this work in ninjatrader 8?
1. The indicator is not yet available for NinjaTrader 8. If you join our mailing list, you will receive notification as soon as it is available.
4. Thank you very much for such a great work. This is by far the best moving average I have ever used.
I would like to make my own version of this indicator in Python. Does anybody knows how to calculate the exponential VWAP?
I am asking because I am a beginner C# programmer, so even though I opened the NinjaScript code of this indicator, I still couldn't figure it out. There are two particular lines I don't understand:
alpha = 1-2/((double)Period+1);
I guess this is defining alpha as a constant of double precision type which is always going to contain the result of this mathematical operation: 1 − 2/(Period + 1).
I guess, but I am not certain about it.
On top of that there is the fact that the indicator has one plot defined this way:
AddPlot(Brushes.Blue, “VWEMA”);
But the plot VWEMA[0] never gets a value, still it draws the indicator on the chart. To me, it looks like magic.
So I gave up and I am looking for the formula to calculate the Exponential VWAP so I can make my own version of this indicator in Python.
Any help would be greatly appreciated.
Ivan Gil
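One common way to define an exponential VWAP, given here as an assumption since it has not been verified against this indicator's NinjaScript, is to divide an EMA of price times volume by an EMA of volume. A minimal Python sketch:

```python
def exponential_vwap(prices, volumes, period):
    """EMA(price * volume) / EMA(volume).

    One common construction for an exponentially weighted VWAP; this is
    an assumption about the formula, not taken from the indicator's
    NinjaScript source.
    """
    alpha = 2.0 / (period + 1)      # standard EMA smoothing factor
    num = prices[0] * volumes[0]    # running EMA of price * volume
    den = volumes[0]                # running EMA of volume
    out = [num / den]
    for p, v in zip(prices[1:], volumes[1:]):
        num = alpha * p * v + (1 - alpha) * num
        den = alpha * v + (1 - alpha) * den
        out.append(num / den)
    return out

# Sanity check: with a constant price, the exponential VWAP stays at
# that price regardless of how volume varies.
print(exponential_vwap([10.0, 10.0, 10.0], [1.0, 2.0, 3.0], period=5))
```

Note that the NinjaScript line quoted above stores the complementary decay factor, 1 − 2/(Period + 1), rather than the smoothing factor itself.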
5. can i download
your chart in tradingvew???
1. Unfortunately the indicator does not work in TradingView.
Regression in Machine Learning
In statistical modeling, regression analysis estimates the relationship between one or more independent variables and the dependent variable which represents the outcome.
To explain with an example, imagine a list of houses with information on size, distance to the city center, and garden (independent variables). Using this information, you can try to
understand how the price (the dependent variable) changes.
So for a regression analysis we have a set of observations or samples with one or more variables/features. Then, we define a dependent variable (the outcome) and try to find a relation between the
dependent variable and the independent variables. The best way to do this is by finding a function which best represents the data.
Linear Models for Regression
In the linear regression model, we will use regression analysis to best represent the dataset through a linear function. Then, we will use this function to predict the outcome of a new sample/
observation which was not in the dataset.
Linear regression is one of the most used regression models due to its simplicity and ease of understanding the results. Let’s move on to the model formulations to understand this better.
The linear regression function is written assuming a linear relationship between the variables:
where w terms are the regression coefficients, x terms are the independent variables or features, y is dependent variable/outcome and c is the constant bias term.
We can write a simple linear regression function for the house example mentioned above.
If we plug the features of a new house into this function, we can predict its price (let's assume size is 150 m2 and distance to city center is 5 km).
Note how the coefficient of distance to city center is negative: the closer to the center, the more expensive the house will be.
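To make the prediction concrete, here is a tiny sketch with invented coefficients (w1 = 1000 per m2, w2 = -5000 per km, c = 50000; these numbers are illustrative, not fitted to any data):

```python
def predict_price(size_m2, distance_km, w1=1000.0, w2=-5000.0, c=50000.0):
    # price = w1 * size + w2 * distance + c (coefficients are invented)
    return w1 * size_m2 + w2 * distance_km + c

# 150 m2 house, 5 km from the city center:
print(predict_price(150, 5))  # 175000.0
```

The negative w2 is what encodes "closer to the center means more expensive."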
We can create a simple fake regression dataset with only one feature and plot it to see the data behaviour more clearly.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
X, y = make_regression(n_samples=80, n_features=1,
                       n_informative=1, bias=50,
                       noise=40, random_state=42)
plt.scatter(X, y, marker='o', s=50)
plt.title('Samples in a dataset with only one feature (independent variable)')
The dataset above has only one independent variable (feature). In this case, the regression function would be:
where w1 would be the slope of the curve and c would be the offset value.
When we train our model on this data, the coefficients and the bias term will be determined automatically so that the regression function best fits the dataset.
The model algorithm finds the best coefficients for the dataset by optimizing an objective function, which in this case would be the loss function. The loss function represents the difference between
the predicted outcome values and the real outcome values.
Least-Squares Linear Regression
In the least-squares linear regression model the coefficients and bias are determined by minimizing the residual sum of squares (RSS) over all of the samples in the data. This model is also called
Ordinary Least Squares.
If we interpret the function, it is the sum of the squares of the differences between the predicted outcome values and the real outcome values.
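As a minimal sketch, this loss can be computed in plain Python (no library assumed):

```python
def rss(y_true, y_pred):
    # Residual sum of squares: sum of squared differences between the
    # real outcomes and the predicted outcomes.
    return sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))

print(rss([1, 2, 3], [1, 2, 4]))  # 1
```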
Let's train the Linear Regression model using the fake dataset we previously created and have a look at the calculated coefficients.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state = 42)
model = LinearRegression()
model.fit(X_train, y_train)
print('feature coefficient (w_1): {}'.format(model.coef_))
print('intercept (c): {:.3f}'.format(model.intercept_))
print('R-squared score (training):{:.3f}'.format(model.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'.format(model.score(X_test, y_test)))
Here, R^2 is the coefficient of determination. This term represents the amount of variation in the outcome (y) explained by the dependence on the features (x variables). Therefore, a larger R^2
indicates a better model performance, or a better fit.
When R^2 equals one, RSS equals 0, meaning the predicted outcome values and the real outcome values are exactly the same. We will be using the R^2 term to measure the performance of
our model.
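A small sketch of the R^2 computation, assuming the usual 1 − RSS/TSS definition (which is what sklearn's `score` reports):

```python
def r_squared(y_true, y_pred):
    # R^2 = 1 - RSS/TSS: the fraction of the outcome's variation
    # around its mean that the predictions explain.
    mean = sum(y_true) / len(y_true)
    rss = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    tss = sum((yt - mean) ** 2 for yt in y_true)
    return 1 - rss / tss

print(r_squared([1, 2, 3], [1, 2, 3]))  # 1.0
```

Predicting the mean for every sample gives R^2 = 0, and perfect predictions give R^2 = 1.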
plt.scatter(X, y, marker= 'o', s=50, alpha=0.7)
plt.plot(X, model.coef_*X + model.intercept_, 'r-')
plt.title('Least-squares linear regression model')
plt.xlabel('Variable/feature value (x)')
plt.ylabel('Outcome (y)')
Ridge Regression - L2 Regularization
The Ridge regression model calculates the coefficients and the bias (w and c) using the same criterion as least squares, but with an extra term.
This term is a penalty that damps large variations in the coefficients. The linear prediction formula is still the same; only the way the coefficients are calculated differs, due to this extra
penalty term. This is called regularization. It serves to prevent overfitting by restricting the variation of the coefficients, which results in a less complex, simpler model.
This extra term is basically the sum of squares of the coefficients. Therefore, when we try to minimize the RSS function, we also minimize the sum of squares of the coefficients, which is called
L2 regularization. Moreover, the alpha constant serves to control the influence of this regularization. This way, in comparison to the least-squares model, we can actually control the complexity of
our model with the help of the alpha term. The higher the alpha term, the stronger the regularization and the simpler the model will be.
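The ridge objective can be sketched as RSS plus the L2 penalty (plain Python, with made-up numbers rather than the diabetes data):

```python
def ridge_loss(y_true, y_pred, coefs, alpha):
    # Ridge objective: RSS plus alpha times the sum of squared
    # coefficients (the L2 penalty).
    rss = sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred))
    penalty = alpha * sum(w ** 2 for w in coefs)
    return rss + penalty

# RSS = 1, penalty = 1.0 * (0.25 + 4.0) = 4.25:
print(ridge_loss([1, 2], [1, 3], coefs=[0.5, 2.0], alpha=1.0))  # 5.25
```

With alpha = 0 this reduces to the plain least-squares loss; increasing alpha pushes the optimizer toward smaller coefficients.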
The accuracy improvement for datasets with a single feature is not significant. However, for datasets with multiple features, regularization can be very effective at reducing
model complexity, and therefore overfitting, and at increasing model performance on the test set.
Let's have a look at its implementation in python:
from sklearn import datasets
X,y = datasets.load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y,random_state = 42)
from sklearn.linear_model import Ridge
model = Ridge()
model.fit(X_train, y_train)
print('feature coefficients: {}'.format(model.coef_))
print('intercept (c): {:.3f}'.format(model.intercept_))
alphas = [1, 5, 10, 20, 50, 100, 500]
features = ['w_' + str(i+1) for i, _ in enumerate(model.coef_)]
for alpha in alphas:
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    plt.scatter(features, model.coef_, alpha=0.7, label='alpha=' + str(alpha))
plt.legend(loc='upper left')
Regularization can penalize features unfairly when they have different scales (when one feature has values around 0-1 and another from 100-1000). This can cause inaccuracies in our
model when we apply regularization. In this case, feature scaling comes to our help by normalizing all the values in the dataset, so that we can get rid of the scale differences. We will look into
feature scaling in another section...
Lasso Regression - L1 Regularization
Lasso regression is also a regularized linear regression model. In comparison to Ridge regression, it uses L1 regularization as the penalty term while calculating the coefficients.
Let's have a look at how the RSS function looks with the penalty term for L1 regularization.
The penalty term for L1 regularization is the sum of absolute values of the coefficients. Therefore, when the algorithm tries to minimize RSS, it enforces the regularization by minimizing the sum of
absolute values of the coefficients.
This results in the coefficients of the least effective parameters going to 0, which acts as a kind of feature selection. Therefore, it is most effectively used for datasets where a few features
have a more dominant effect than the others; the features which have only a small effect are eliminated by setting their coefficients to 0.
The alpha term is again used to control the amount of regularization.
from sklearn.linear_model import Lasso
model = Lasso()
model.fit(X_train, y_train)
print('feature coefficients: {}'.format(model.coef_))
print('intercept (c): {:.3f}'.format(model.intercept_))
After finding the coefficients of the dominant features, we can go ahead and list their labels.
import numpy as np
data = datasets.load_diabetes()
alphas = [0.1,0.5,1,2,5,10]
for alpha in alphas:
    model = Lasso(alpha=alpha).fit(X_train, y_train)
    print('feature coefficients for alpha={}: \n{}'.format(alpha, model.coef_))
    print('R-squared score (training): {:.3f}'.format(model.score(X_train, y_train)))
    print('R-squared score (test): {:.3f}\n'.format(model.score(X_test, y_test)))
Ridge or Lasso?
To sum up, it makes sense to use the Ridge regression model when there are many small to medium effective features. If there are only a few dominantly effective features, use the Lasso regression model.
Polynomial Regression
Linear regression performs well under the assumption that the relationship between the independent variables (features) and the dependent variable (outcome) is linear. If the distribution of the data
is more complex and does not show linear behaviour, can we still use linear models to represent such datasets? This is where polynomial regression comes in very useful.
To capture this complex behaviour, we can add higher-order terms to represent the features in the data. Transforming the linear model with one feature:
Since the coefficients are related to the features linearly, this is still a linear model. However, it contains quadratic terms, and the curve fitted is a polynomial curve.
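For a single feature, the expansion can be sketched as follows (mirroring, for the one-variable case, what sklearn's PolynomialFeatures produces):

```python
def poly_features(x, degree=2):
    # Expand a single feature x into [1, x, x^2, ..., x^degree];
    # the linear model then fits one coefficient per expanded term.
    return [x ** d for d in range(degree + 1)]

print(poly_features(3.0, 2))  # [1.0, 3.0, 9.0]
```

The model stays linear in its coefficients; only the inputs have been transformed.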
Let's continue with an example of polynomial regression. To convert the features to higher-order terms, we can use the PolynomialFeatures class from scikit-learn. Then we can use the linear
regression model from before to train the model.
But first, let us create a dataset which is a good fit for a 2nd-degree function. For that we will use numpy to create random X points and plug them into a representative function.
X = 2 - 3 * np.random.normal(0, 1, 100)
y = X - 2 * (X ** 2) + np.random.normal(-3, 3, 100)
plt.scatter(X, y, s=10)
We can reshape the arrays we created so that we can feed them in to the model. First, we will train a LinearRegression model to see how it fits to this data.
X = X[:, np.newaxis]
y = y[:, np.newaxis]
model = LinearRegression()
model.fit(X, y)
print('feature coefficients: \n{}'.format(model.coef_))
print('R-squared score (training): {:.3f}'.format(model.score(X, y)))
plt.plot(X, model.coef_*X + model.intercept_, 'r-')
plt.scatter(X, y, s=10)
As expected, the linear regression model does not provide a very good fit with the raw features for a dataset with this behaviour. Now we can create 2nd-order polynomial features using the
PolynomialFeatures class from the scikit-learn library, then use these new 2nd-order features to train the same linear regression model.
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2)
X_poly = poly_features.fit_transform(X)
model = LinearRegression()
model.fit(X_poly, y)
print('feature coefficients: \n{}'.format(model.coef_))
print('R-squared score (training): {:.3f}'.format(model.score(X_poly, y)))
plt.scatter(X,model.predict(X_poly),s=10,label="polynomial prediction")
plt.scatter(X,y,s=10,label="real data")
plt.legend(loc='lower left')
This time, we were able to obtain a very good fit using the same linear regression model, but with 2nd-order features obtained from the PolynomialFeatures class. This is a good example of how
polynomial regression can be used to obtain better fits for data which do not have a linear relationship between the features and the outcome value.
If you like this, feel free to follow me for more free machine learning tutorials and courses!
Tatarski wave propagation in a turbulent medium pdf file
A study of phase variance article pdf available in the journal of the acoustical society of america 891. Backward wave propagation in lefthanded media with isotropic. Mar 22, 2001 propagation and
scattering of acoustic waves in a turbulent medium propagation and scattering of acoustic waves in a turbulent medium soczkiewicz, e chivers, r. A short theory of bulk wave propagation is given and
the important concepts of group and phase velocities are explained. Tatarski wave propagation in a turbulent medium 1961. The first page of the pdf of this article appears above. The effects of the
turbulent atmosphere on wave propagation.
On 24 forms of the acoustic wave equation in vortical flows. Introduction to wave propagation in a turbulent medium dtic. The observation fundamental to this work is that the ocean is usually in a
state of turbulent motion. Advanced computational techniques in geophysical sciences, barcelona, spain, july 2014 juan e. In this paper we analyze a wave based imaging modality called ghost imaging
that can produce an image of an object illuminated by a partially coherent source. Wave propagation in a turbulent medium, translated by r. For many systems applications an actual source might be
better described as a diffraction-limited beam with a gaussian transverse irradiance profile. The propagation and spreading of a wave field can be related to time reversal and refocusing of the wave
field by a general duality relation. Tatarski wave propagation in a turbulent medium 1961 free ebook download as pdf file.
Wave propagation and turbulent media, modern analytic and computational methods in science and mathematics by adams, roy n and a great selection of related books, art and collectibles available now
at. Application of methods of quantum field theory to wave propagation in a random medium. Wave propagation in a turbulent medium dover books on. The theory is initially worked out in detail for the
propagation of transverse waves along an infinite stretched string whose density is a random function of position. Wave propagation in a turbulent medium download ebook pdfepub. Stanford libraries
official online search tool for books, media, journals, databases, government documents and more. Herein we derive an expression for direct determination of the geometric autocorrelation function w
of a polycrystalline material from images of its grain boundary network e. Simulation of the propagation of an acoustic wave through a turbulent velocity field. Part iii offers a detailed
presentation of line-of-sight propagation of acoustic and electromagnetic waves through a turbulent medium. Part IV concludes the text with a comparison of theory with experimental data. This paper
discusses a general theory of wave propagation through a random medium whose random inhomogeneities are confined to small deviations from the mean. Wave propagation in anisotropic medium due to an
oscillatory. Department of electrical and computer engineering, university of toronto. If the wave speed is constant across different wave numbers, then no dispersion would occur.
In this work, an effective technique for the measurement of seismic wave. Iii optical propagation through the turbulent atmosphere. The image of the object is obtained by correlating the intensities
measured by two detectors, one that does not view the object and another one that does view the object. Click download or read online button to get wave propagation in a turbulent medium book now.
Analytical models are then developed to describe wave propagation in anisotropic materials. Valerian ilich tatarski was on the the staff of the institute of atmospheric physics of the ussr academy of
sciences. The long propagation paths involved in radio and stellar occultations by turbulent planetary atmospheres require that the classical, weak scattering.
Correspondingly, the value of the temperature at every. Numerical solutions of wave propagation in beams by ryan. Random functions and turbulence download ebook pdf, epub. Israel program of
scientific translations, jerusalem. Tatarski wave propagation in a turbulent medium 1961 scribd. These waves have a surprisingly narrow range of frequencies and vertical wavenumbers. However, if the
wave number is expressed as a nonconstant function of the wave speed, then the waves would disperse. Numerical solution of wave scattering problems in the. Pdf coherence of beam arrays propagating in
the turbulent. Other compression waves are seen to be still travelling through the domain. On the geometric autocorrelation function of polycrystalline. Tatarski, wave propagation in a turbulent
medium 4.
Wave propagation in fractured poroelastic media wccm, ms170. The diameter of the seeing disk, most often defined as the full width at half maximum fwhm, is a measure of the astronomical seeing
conditions. Other readers will always be interested in your opinion of the books youve read. The objective of this paper is to specify the source of noise in electromagnetic signal reception due to
turbulence in the flow about a high velocity flight vehicle. The author takes a systematic and in depth approach to answering both audiences, separately and jointly, by demonstrating a way to obtain
analytic answers, the integration method, and by developing a way to express solutions to electromagnetic wave propagation in turbulence problems in integral form.
Effects of the turbulent atmosphere on wave propagationtt6850464. In many real problems involving wave propagation in random media, such as those arising out of investigations of sound propagation in
the atmosphere or ocean, the propagation medium may be regarded as weakly inhomogeneous in the sense that it deviates only slightly from a uniform state. All books are in clear copy here, and all
files are secure so dont worry about it. Pdf simulation of the propagation of an acoustic wave. The effects of the turbulent atmosphere on wave propagation translated by israel program for scientific
translations. Oct 31, 20 initially, a brief background of the research in wave propagation in solids is laid out to provide a historical context for this thesis. Pdf propagation of electromagnetic
waves in kolmogorov. Wave propagation and multiple scattering in a random continuum. When the effect of the turbulent medium is important, the turbulence-induced time-reversal aperture corresponds to a
time-reversal resolution much better than the resolution in the absence of the turbulent medium. Wave propagation in a turbulent medium download ebook.
Publishers pdf, also known as version of record includes final page, issue and volume numbers. Brandon rodenburg,1 mohammad mirhosseini,1 and stephen m. Slawinski in partial fulfilment of the
requirement for the degree of doctor of philosophy. Tatarski considers a plane or spherical wave impinging on a slab of turbulent atmosphere whose statistics are known. Part ii, on the scattering of
waves in the turbulent atmosphere, is supplemented by an appendix on scattering of acoustic radiation. Wave propagation in anisotropic medium due to an oscillatory point source with application to
unidirectional composites james h. The effect of turbulence on the structure of weak shock waves is investigated. Please click button to get wave propagation in a turbulent medium book now.
Propagation of weak shock waves through turbulence. In order to facilitate understanding of the core material, they also address a number of related topics in conventional sensor array imaging, wave
propagation in random media, and highfrequency asymptotics for wave propagation. Effect of schmidt number on smallscale passive scalar. Aug 01, 2006 influence of reentry turbulent plasma fluctuation
on em wave propagation emphasis is placed on the effects of electron density fluctuations on electromagnetic wave propagation. Wave propagation in anisotropic media springerlink. The classical theory
of wave propagation in a turbulent medium. Passive imaging with ambient noise by josselin garnier. In order to analyze the influence of spectral powerlaw variations on the scintillation index of a
gaussian beam through nonkolmogorov moderatestrong turbulence, the scintillation index is plotted in fig. Influence of reentry turbulent plasma fluctuation on em wave. The propagation through a
turbulent medium refers to those cases where the wavelength of the optical wave is much smaller than a typical distance over which. Coherence of beam arrays propagating in the turbulent atmosphere.
Effects of the turbulent atmosphere on wave propagation.
Wave propagation in a turbulent medium dover books on physics. Wave propagation and underwater acoustics joseph b. Scintillation of a laser beam propagation through non. Parameter fluctuations of
acoustic waves propagating in a turbulent. In a typical astronomical image of a star with an exposure time of seconds or even minutes, the different distortions average out as a filled disc called
the seeing disc.
The angles of wave propagation from the vertical for the dominant waves lie in the range q54255. Kolmogorov the beamwidth spread and directionality of partially coherent hermiteganssian beams
propagating through nonkolmogorov atmospheric turbulence. Propagation of electromagnetic waves in kolmogorov and nonkolmogorov atmospheric turbulence. Velickovic 2 1 faculty of electronic engineering
nis,yugoslavia 2 eiprofessional electronic factory, nis yugoslavia abstract. Implications of the theory of turbulent mixing for wave propagation in media with. Wave propagation in a turbulent medium
physics today. Internal waves generated from a turbulent mixed region.
|
{"url":"https://schulrogosro.web.app/15.html","timestamp":"2024-11-07T16:32:50Z","content_type":"text/html","content_length":"14486","record_id":"<urn:uuid:b395d7f2-3819-4159-abee-5008e4f83ce1>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00115.warc.gz"}
|
The Bachelor of Science in Mathematics
Total Course Requirements for the Bachelor's Degree: 120 units
See Bachelor's Degree Requirements in the University Catalog for complete details on general degree requirements. A minimum of 39 units, including those required for the major, must be upper division.
A suggested Major Academic Plan (MAP) has been prepared to help students meet all graduation requirements within four years. You can view MAPs on the Degree MAPs page in the University Catalog or you
can request a plan from your major advisor.
General Education Pathway Requirements: 48 units
See General Education in the University Catalog and the Class Schedule for the most current information on General Education Pathway Requirements and course offerings.
This major has approved GE modification(s). See below for information on how to apply these modification(s).
• MATH 217 is an approved major course substitution for Critical Thinking (A3).
• MATH 330W is an approved major course substitution for Upper-Division Natural Sciences.
These modifications apply to The Option in Mathematics Education - Credential Path only
• EDTE 451 fulfills Learning for Life (E)
• EDTE 302, ENGL 471, and MATH 333 fulfill the Upper-Division Pathway requirement.
Diversity Course Requirements: 6 units
See Diversity Requirements in the University Catalog. Most courses taken to satisfy these requirements may also apply to General Education .
Upper-Division Writing Requirement:
Writing Across the Curriculum (Executive Memorandum 17-009) is a graduation requirement and may be demonstrated through satisfactory completion of four Writing (W) courses, two of which are
designated by the major department. See Mathematics/Quantitative Reasoning and Writing Requirements in the University Catalog for more details on the four courses. The first of the major designated
Writing (W) courses is listed below.
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 330W Methods of Proof (W) 3.0 FS W
Prerequisites: GE Written Communication (A2) requirement and MATH 121.
A survey of elementary principles of logic, emphasizing the nature of proof. Standard methods of proof will be illustrated with examples from various branches of mathematics, including set theory and
the theory of functions and relations. Other possible sources of examples include the calculus, number theory, theory of equations, topology of the real line. 3 hours seminar. This is an approved
Writing Course. Formerly MATH 330. (005530)
The second major-designated Writing course is the Graduation Writing Assessment Requirement (GW) (Executive Order 665). Students must earn a C- or higher to receive GW credit. The GE Written
Communication (A2) requirement must be completed before a student is permitted to register for a GW course.
Grading Requirement:
All courses taken to fulfill major course requirements must be taken for a letter grade except those courses specified by the department as Credit/No Credit grading only.
Enrollment in any mathematics course requires a grade of C- or higher in all prerequisite courses or their transfer equivalents.
Course Requirements for the Major: 48-56 units
Completion of the following courses, or their approved transfer equivalents, is required of all candidates for this degree. Additional required courses, depending upon the selected option are
outlined following the major core program requirements.
Major Core Program: 26-29 units
5 courses required:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 120 Analytic Geometry and Calculus 4.0 FS GE
Prerequisites: GE Mathematics/Quantitative Reasoning Ready; both MATH 118 and MATH 119 (or college equivalent); first-year freshmen who successfully completed trigonometry and precalculus in high
school can meet this prerequisite by achieving a score that meets department guidelines on a department administered calculus readiness exam.
Limits and continuity. The derivative and applications to related rates, maxima and minima, and curve sketching. Transcendental functions. An introduction to the definite integral and area. 4 hours
discussion. This is an approved General Education course. (005506)
MATH 121 Analytic Geometry and Calculus 4.0 FS
Prerequisite: MATH 120.
The definite integral and applications to area, volume, work, differential equations, etc. Sequences and series, vectors and analytic geometry in 2 and 3-space, polar coordinates, and parametric
equations. 4 hours discussion. (005507)
MATH 235 Elementary Linear Algebra 3.0 FS
Prerequisites: MATH 121.
Matrices, determinants, cartesian n-space (basis and dimension of a subspace, rank, change of basis), linear transformations, eigenvalues. Numerical problems will be emphasized. 3 hours discussion. (
MATH 300 Undergraduate Mathematics Seminar 2.0 FS
Prerequisite: GE Mathematics/Quantitative Reasoning Ready.
This course is designed to expose you to mathematics not normally covered in your regular curriculum. Guest speakers are drawn from the ranks of our faculty, including other disciplines, our
students, and industry. Talks are interactive, participatory, and fun. There is no prerequisite, except an interest in interesting mathematics. Topics typically include selections from number theory,
math education, statistics, problem solving, undergraduate research, calculus, differential equations, spatial and planar geometry, probability, computer applications, mathematical operations,
modeling, topology, trigonometry, metric measurements, elliptical curves, and bubbles, among others. This exposure broadens your horizons and expands your curiosity in hopes that you will explore
mathematics beyond your required courses. 2 hours lecture. You may take this course more than once for a maximum of 8.0 units. Credit/no credit grading. (021647)
MATH 330W Methods of Proof (W) 3.0 FS W
Prerequisites: GE Written Communication (A2) requirement and MATH 121.
A survey of elementary principles of logic, emphasizing the nature of proof. Standard methods of proof will be illustrated with examples from various branches of mathematics, including set theory and
the theory of functions and relations. Other possible sources of examples include the calculus, number theory, theory of equations, topology of the real line. 3 hours seminar. This is an approved
Writing Course. Formerly MATH 330. (005530)
4-6 units selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 220 Analytic Geometry and Calculus 4.0 FS
Prerequisites: MATH 121.
Vector functions and space curves. Functions of several variables, partial derivatives, and multiple integrals. Vector calculus: line integrals, surface integrals, divergence/curl, Green's Theorem,
Divergence Theorem, and Stokes' Theorem. 4 hours discussion. (005508)
Or the following group of courses may be selected:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 125 Advanced Number and Operation 3.0 FA
Prerequisite: Successful completion of high school precalculus, concurrent enrollment in MATH 118 or 119, or faculty permission.
Investigate number and operation through calculation and abstraction, find patterns and relationships through computation, develop and test mathematical conjectures, and develop an appreciation of
proof and an ability to make mathematical arguments. Basic concepts from Number Theory are explored, culminating in proof of the Fundamental Theorem of Arithmetic and related theorems in other number
sets. 3 hours discussion. (021846)
MATH 225 Algebra Functions, Real and Complex Number Systems 3.0 SP
Prerequisite: MATH 125.
This course focuses on developing your abilities in making sense of algebraic manipulation in the context of functions, polynomial rings, and matrices. The course and the classroom are structured as
a supportive, collaborative learning environment in which mathematical discourse is valued and exploration encouraged. You will investigate algebra and polynomials through calculation and
abstraction, find patterns and relationships through computation, develop and test mathematical conjectures, and develop an appreciation of proof and an ability to construct mathematical arguments.
More advanced concepts from Number Theory are explored, culminating in proofs of the Unique Prime Factorization Theorem and the Division Algorithm for different rings. 3 hours discussion. (021953)
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 420W Advanced Calculus (W) 3.0 FS GW W
Prerequisites: Completion of GE Written Communication (A2) requirement, MATH 220, MATH 330, upper-division standing.
Limits, continuity, uniform continuity, the definite integral, series, convergence, uniform convergence, and metric spaces. Differentiation and integration of functions of several variables.
Transformation of multiple integrals. 3 hours discussion. This is an approved Graduation Writing Assessment Requirement course; a grade of C- or higher certifies writing proficiency for majors. This
is an approved Writing Course. (005575)
MATH 425W Computation and Communication in Mathematical Modeling (W) 3.0 FA GW W
Prerequisites: GE Written Communication (A2) requirement, completion of computer literacy requirement, MATH 225, MATH 235, MATH 330W, and upper division standing.
In this course, intended for pre-service teachers, students experience mathematical modeling with content common in the secondary setting (algebra through calculus) as well as from their undergraduate
coursework and develop and produce formal modeling reports. Students use technology to aid in exploring real-world circumstances, make sense of and analyze existing models, and develop their own
mathematical models. 3 hours discussion. This is an approved Graduation Writing Assessment Requirement course; a grade of C- or higher certifies writing proficiency for majors. This is an approved
Writing Course. (021977)
The MATH 120, MATH 121, MATH 220 sequence should be started as early as possible, provided the student has the necessary background. MATH 118 and MATH 119 (or their equivalents) are required
pre-calculus courses for MATH 120.
Some upper-division courses require only MATH 120 or MATH 121 as a prerequisite. Refer to catalog course listings when choosing courses.
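The sequencing note above can be pictured as a small prerequisite graph. The sketch below (illustrative only; the `course_order` helper is not part of the catalog) encodes the direct prerequisites of the core calculus chain and lists an order in which the courses can be taken so that prerequisites always come first:

```python
# Direct prerequisites for the core calculus sequence, taken from the
# course listings above (MATH 118 and MATH 119 are the pre-calculus courses
# required for MATH 120).
prereqs = {
    "MATH 118": [],
    "MATH 119": [],
    "MATH 120": ["MATH 118", "MATH 119"],
    "MATH 121": ["MATH 120"],
    "MATH 220": ["MATH 121"],
}

def course_order(prereqs):
    """Return courses in an order where every prerequisite precedes the
    course that requires it (a simple depth-first topological sort)."""
    order, seen = [], set()

    def visit(course):
        if course in seen:
            return
        seen.add(course)
        for p in prereqs[course]:
            visit(p)
        order.append(course)

    for course in prereqs:
        visit(course)
    return order

print(course_order(prereqs))
# → ['MATH 118', 'MATH 119', 'MATH 120', 'MATH 121', 'MATH 220']
```

This is why the catalog advises starting the MATH 120, MATH 121, MATH 220 sequence as early as possible: each course gates the next, so a delayed start pushes back every later course in the chain.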
Computer Literacy Requirement
A passing grade in one of the following classes or its transfer equivalent.
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
CINS 110 Introductory Web Programming 3.0 FS
This course introduces students to programming in the context of dynamic web page development. The operation of the web browser and its interaction with web servers is explored. Structure and style
of web page content using HTML and CSS is introduced. The main focus of the course is programming in JavaScript to add dynamic content to a web page. Topics include all language constructs,
interaction with the DOM, event-driven programming, debugging using an integrated debugger in the browser, and the use of existing APIs. 2 hours discussion, 2 hours activity. (002298)
CSCI 111 Programming and Algorithms I 4.0 FS
Prerequisite: MATH 109, MATH 119 (or high school equivalent), or MATH 120; or a passing score on the Math department administered calculus readiness exam.
A first-semester programming course, providing an overview of computer systems and an introduction to problem solving and software design using procedural object-oriented programming languages.
Coverage includes the software life cycle, as well as algorithms and their role in software design. Students are expected to design, implement, and test a number of programs. 3 hours lecture, 2 hours
activity. (002281)
MATH 230 An Introduction to Computational Mathematics 3.0 FA
Prerequisites: MATH 121, no previous computer experience required.
An introduction to the use of mathematical computer software. This course provides an introduction to a programming environment, preparing math majors to use computers to explore and solve varied
math problems. The software used in this class depends on the instructor and may be chosen from Mathematica, GP/PARI, GAP, SAS, R, etc. This course satisfies the computer literacy requirement for
mathematics majors. 3 hours discussion. You may take this course more than once for a maximum of 9.0 units. (005526)
Major Option Course Requirements: 22-27 units
The following courses, or their approved transfer equivalents, are required dependent upon the option chosen. Students must select one of the following options for completion of the major course
requirements. Use the links below to jump to your chosen option.
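As a quick sanity check on the unit totals, the 48-56 unit major requirement is the sum of the Major Core Program (26-29 units) and the option requirements (22-27 units). The minimal sketch below simply restates those figures from the listings above and adds the range endpoints:

```python
# Unit ranges as (minimum, maximum), copied from the section headings above.
core_units = (26, 29)      # Major Core Program: 26-29 units
option_units = (22, 27)    # Major Option Course Requirements: 22-27 units

# Add the endpoints to get the combined range for the major.
major_total = (core_units[0] + option_units[0],
               core_units[1] + option_units[1])

print(major_total)
# → (48, 56), matching "Course Requirements for the Major: 48-56 units"
```

Note that individual options fall inside this range (General Mathematics 24-26, Applied Mathematics 25, Mathematics Education 25-27), so a student's actual total depends on the option and electives chosen.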
The Option in General Mathematics: 24-26 units
3 courses required:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 421 Advanced Calculus 3.0 SP
Prerequisite: MATH 420W.
Continuation of MATH 420W. 3 hours discussion. (005576)
MATH 449 Modern Algebra 3.0 FA
Prerequisites: MATH 220, MATH 235, MATH 330.
Introduction to basic algebraic structures such as groups, rings, and fields. The fundamental concepts of homomorphism, subgroup, normal subgroup and factor group of a group as well as subring, ideal
and factor ring of a ring; permutation groups and matrix groups. 3 hours discussion. (005582)
MATH 465 Introduction to Complex Variables 3.0 FA
Prerequisites: MATH 220.
Algebra of Complex Numbers, Cauchy-Riemann Equations, the exponential, trigonometric, and logarithmic functions, complex integration and Cauchy integral formula, Taylor and Laurent series, the
residue theorem, conformal mapping, and applications. 3 hours discussion. (005577)
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 260 Elementary Differential Equations 4.0 FS
Prerequisites: MATH 121.
First order separable, linear, and exact equations; second order linear equations, Laplace transforms, series solutions at an ordinary point, systems of first order linear equations, and
applications. 4 hours discussion. (005509)
MATH 350 Introduction to Probability and Statistics 3.0 FA
Prerequisites: MATH 121.
Basic concepts of probability theory, random variables and their distributions, limit theorems, sampling theory, topics in statistical inference, regression, and correlation. 3 hours discussion. (
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 337 Introduction to the Theory of Numbers 3.0 FA
Prerequisites: MATH 121, MATH 330.
Basic properties of the integers, division algorithm, fundamental theorem of arithmetic, number-theoretic functions, Diophantine equations, congruences, quadratic residues, continued fractions. 3
hours discussion. (005585)
MATH 344 Combinatorial Mathematics and Graph Theory 3.0 FA
Prerequisites: MATH 121, MATH 330.
The analysis of mathematical and applied problems through the use of permutations and combinations, generating functions and recurrence relations. Directed graphs, trees, connectivity, and duality. 3
hours discussion. (005591)
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 428 Differential Geometry 3.0 F1
Prerequisites: MATH 220, MATH 330.
The geometry of curves and surfaces in Euclidean 3-space. 3 hours lecture. (005566)
MATH 437 Topology 3.0 F2
Prerequisites: MATH 220, MATH 330.
Metric spaces, continuous functions, homeomorphisms, separation, and covering axioms, connectedness. 3 hours discussion. (005563)
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 435 Linear Algebra 3.0 S2
Prerequisites: MATH 220, MATH 235, MATH 330.
Vector spaces, linear operators, bilinear forms and scalar products, unitary spaces; matrix polynomials, eigenvalues, and Jordan normal form. 3 hours discussion. (005581)
MATH 451 Modern Algebra II 3.0 S1
Prerequisite: MATH 449.
Continuation of MATH 449, topics may include group actions, the Sylow theorems, number fields, finite fields, algebraic extensions, field automorphisms, splitting fields of polynomials, Galois
groups, and solvable groups. 3 hours discussion. (021971)
3-4 units selected from:
Any upper-division Mathematics (MATH) courses except MATH 305, MATH 310, MATH 311, MATH 341, MATH 342, and MATH 441.
The Option in Applied Mathematics: 25 units
7 courses required:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 260 Elementary Differential Equations 4.0 FS
Prerequisites: MATH 121.
First order separable, linear, and exact equations; second order linear equations, Laplace transforms, series solutions at an ordinary point, systems of first order linear equations, and
applications. 4 hours discussion. (005509)
MATH 350 Introduction to Probability and Statistics 3.0 FA
Prerequisites: MATH 121.
Basic concepts of probability theory, random variables and their distributions, limit theorems, sampling theory, topics in statistical inference, regression, and correlation. 3 hours discussion. (
MATH 360 Ordinary Differential Equations 3.0 SP
Prerequisites: MATH 260.
Systems of first order linear equations, existence and uniqueness theorems, stability, Sturm separation theorems, power series methods. 3 hours discussion. (005538)
MATH 361 Boundary Value Problems and Partial Differential Equations 3.0 FA
Prerequisites: MATH 260.
Partial differential equations, separation of variables, orthogonal sets of functions, Sturm-Liouville problems, Fourier series, boundary value problems for the wave equation, heat equation, and
Laplace equation; Bessel functions, Legendre polynomials. 3 hours discussion. (005540)
MATH 461 Numerical Analysis 3.0 SP
Prerequisites: MATH 220 or MATH 260; completion of computer literacy requirement.
Approximation; numerical integration; numerical solution of ordinary and partial differential equations; interpolation and extrapolation. 3 hours discussion. (005584)
MATH 465 Introduction to Complex Variables 3.0 FA
Prerequisites: MATH 220.
Algebra of Complex Numbers, Cauchy-Riemann Equations, the exponential, trigonometric, and logarithmic functions, complex integration and Cauchy integral formula, Taylor and Laurent series, the
residue theorem, conformal mapping, and applications. 3 hours discussion. (005577)
MATH 480 Mathematical Modeling 3.0 SP
Prerequisites: MATH 235, MATH 260.
The translation of real world phenomena into mathematical language. Possible applications include population and competing species models, mathematical theories of war, traffic flow, river pollution,
water waves and tidal dynamics, probabilistic and simulation models. 3 hours discussion. (005592)
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 472 Introduction to Chaotic Dynamical Systems 3.0 F1
Prerequisites: MATH 260. Recommended: MATH 235, MATH 360.
An introduction to the study of non-linear dynamical systems. Both discrete and continuous systems will be studied using classical analysis combined with geometric techniques and computer simulation.
Areas of application include fractal geometry, coding theory, fluid turbulence, population fluctuation, and chaotic vibrations of structures and circuits. 3 hours discussion. (005588)
MATH 475 Calculus of Variations 3.0 F2
Prerequisites: MATH 260; MATH 361 is recommended.
Classical problems in the calculus of variations. Euler-Lagrange equations. Isoperimetric problems, Fermat's principle. Lagrangian and Hamiltonian mechanics of particles. Two independent variables.
Applications to physics and engineering. 3 hours discussion. (005590)
The Option in Mathematics Education: 25-27 units
The following program, together with the major core program, fulfills all requirements for the Single Subject Matter Preparation Program in Mathematics.
7 courses required:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 305 Conceptual and Practical Statistics 3.0 SP
Prerequisites: MATH 120 or MATH 109 (may be taken concurrently).
Design of statistical experiments, graphing, sampling techniques, probability, and common probability distributions will be discussed, with an emphasis on practical applications. Uses and misuses of
statistics, misrepresentation of data, and proper and improper statistical analyses will be discussed. 3 hours discussion. (005532)
MATH 333 History of Mathematics 3.0 SP
Prerequisites: MATH 121; MATH 220 or MATH 225; and at least one upper division mathematics course. MATH 330 is recommended.
Study of the historical development of mathematics, with particular emphasis on the relationship between mathematics and society. 3 hours discussion. (005531)
MATH 337 Introduction to the Theory of Numbers 3.0 FA
Prerequisites: MATH 121, MATH 330.
Basic properties of the integers, division algorithm, fundamental theorem of arithmetic, number-theoretic functions, Diophantine equations, congruences, quadratic residues, continued fractions. 3
hours discussion. (005585)
MATH 341 Mathematical Topics for the Credential 3.0 FA
Prerequisites: MATH 121 or MATH 225.
This course is designed to supplement the mathematical background of the candidate for the single subject credential in mathematics. The mathematical topics will be discussed from the student's and
the teacher's points of view to aid the candidate in making the transition to secondary school mathematics. Topics include mathematical problem-solving, conceptual ideas using algebra, geometry, and
functions, incorporating technology into the mathematics curriculum, and finite systems. 3 hours seminar. (005544)
MATH 342 Math Topics for the Credential 3.0 SP
Prerequisites: MATH 341.
This course focuses on having students examine mathematical pedagogy and the understanding and evaluations of students as mathematical learners as it analyzes secondary mathematics curriculum from an
advanced standpoint. Students will have opportunities to be involved in the facilitation of mathematical learning. Topics include: history of mathematics education, contemporary mathematics
curricula, problem solving, mathematical reasoning and methods of proof, mathematical learning theories, communication, assessment and collaborative learning communities. 3 hours discussion. (005545)
MATH 346 College Geometry 3.0 SP
Prerequisites: MATH 220 or MATH 225; MATH 330.
An exploration of axioms and models for Euclidean and non-Euclidean geometries focusing on the independence of the Parallel Postulate. Additional topics will be chosen from Euclidean plane geometry,
transformation geometry, and the geometry of polyhedra. 3 hours discussion. (005561)
MATH 449 Modern Algebra 3.0 FA
Prerequisites: MATH 220, MATH 235, MATH 330.
Introduction to basic algebraic structures such as groups, rings, and fields. The fundamental concepts of homomorphism, subgroup, normal subgroup and factor group of a group as well as subring, ideal
and factor ring of a ring; permutation groups and matrix groups. 3 hours discussion. (005582)
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 195 Project MATH Seminar Year 1 1.0 FS
The Project M.A.T.H. Seminar - Year 1 is a biweekly seminar for students in their first year of Project M.A.T.H., an innovative program for students interested in becoming secondary mathematics
teachers. Students work with mentor teachers, prepare and present lessons, and participate in a structured early field experience. Completion of the seminar series satisfies the Credential Program's
Early Field Experience requirement. 1 hour seminar. You may take this course more than once for a maximum of 2.0 units. Credit/no credit grading. (020431)
MATH 241 Secondary Math Early Field Experience 1.0 FS
This seminar and the associated CAVE field experience give prospective teachers early exposure to issues relevant to the profession of teaching secondary mathematics. In particular, the experience
helps these future teachers develop a deeper understanding of the K-12 mathematics curriculum, understand connections between their university subject matter preparation and K-12 academic content,
and reflect on developmental and social factors that affect K-12 students' learning of mathematics. 1 hour seminar. You may take this course more than once for a maximum of 4.0 units. Credit/no
credit grading. (020432)
3-5 units selected from:
Note: If MATH 441 is chosen, an additional unit of MATH 241 or MATH 295 must be taken.
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 442 Mathematics and the Teaching of Mathematics 3.0 FA
Prerequisites: MATH 342.
Completes a three course series, started with two semesters of Mathematics for the Credential, MATH 341 and MATH 342. Students compare instructional strategies and explore the role content and
pedagogical content knowledge has in these strategies. Central to the class is a lesson study project which entails a cycle of lesson development, implementation, reflection and revision, and
implementation again. Students concurrently enrolled in EDTE 535A, Teaching Practicum I for Blended Math Candidates, are able to implement their lesson as part of the practicum, and have a real
context for the full content of the course. 3 hours lecture. (020978)
Or the following group of courses may be selected:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 295 Project MATH Seminar Year 2 1.0 FS
Prerequisite: MATH 195.
The Project M.A.T.H. Seminar - Year 2 is the continuation of a biweekly seminar for students in Project M.A.T.H., an innovative program for students interested in becoming secondary mathematics
teachers. Students work with mentor teachers, prepare and present lessons, and participate in a structured early field experience. They also take on a leadership role in the seminar. Completion of
the seminar series satisfies the Credential Program's Early Field Experience requirement. 1 hour seminar. You may take this course more than once for a maximum of 2.0 units. Credit/no credit grading.
MATH 441 Math Topics for the Credential 4.0 FS
Prerequisites: MATH 342.
Corequisites: Assignment as a Mathematics Department intern.
Supervised internship in teaching mathematics with accompanying seminar. Guidance in facilitation of mathematical learning. Topics include contemporary mathematics curriculum topics, mathematical
learning theories, communication, and assessment. 3 hours seminar, 3 hours supervision. You may take this course more than once for a maximum of 8.0 units. Credit/no credit grading. (005546)
Or the following group of courses may be selected:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 241 Secondary Math Early Field Experience 1.0 FS
This seminar and the associated CAVE field experience give prospective teachers early exposure to issues relevant to the profession of teaching secondary mathematics. In particular, the experience
helps these future teachers develop a deeper understanding of the K-12 mathematics curriculum, understand connections between their university subject matter preparation and K-12 academic content,
and reflect on developmental and social factors that affect K-12 students' learning of mathematics. 1 hour seminar. You may take this course more than once for a maximum of 4.0 units. Credit/no
credit grading. (020432)
MATH 441 Math Topics for the Credential 4.0 FS
Prerequisites: MATH 342.
Corequisites: Assignment as a Mathematics Department intern.
Supervised internship in teaching mathematics with accompanying seminar. Guidance in facilitation of mathematical learning. Topics include contemporary mathematics curriculum topics, mathematical
learning theories, communication, and assessment. 3 hours seminar, 3 hours supervision. You may take this course more than once for a maximum of 8.0 units. Credit/no credit grading. (005546)
Subject matter preparation requirements are governed by federal and state legislative action and approval of the California Commission on Teacher Credentialing. Requirements may change between
catalogs. Please consult with your departmental credential advisor for current information.
The Option in Mathematics Education - Credential Path: 73-75 units
The following program, together with the major core program, fulfills all requirements for both a degree in Mathematics (Mathematics Education Option) and the Single Subject Credential in Mathematics.
7 courses required:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 305 Conceptual and Practical Statistics 3.0 SP
Prerequisites: MATH 120 or MATH 109 (may be taken concurrently).
Design of statistical experiments, graphing, sampling techniques, probability, and common probability distributions will be discussed, with an emphasis on practical applications. Uses and misuses of
statistics, misrepresentation of data, and proper and improper statistical analyses will be discussed. 3 hours discussion. (005532)
MATH 333 History of Mathematics 3.0 SP
Prerequisites: MATH 121; MATH 220 or MATH 225; and at least one upper division mathematics course. MATH 330 is recommended.
Study of the historical development of mathematics, with particular emphasis on the relationship between mathematics and society. 3 hours discussion. (005531)
MATH 337 Introduction to the Theory of Numbers 3.0 FA
Prerequisites: MATH 121, MATH 330.
Basic properties of the integers, division algorithm, fundamental theorem of arithmetic, number-theoretic functions, Diophantine equations, congruences, quadratic residues, continued fractions. 3
hours discussion. (005585)
MATH 341 Mathematical Topics for the Credential 3.0 FA
Prerequisites: MATH 121 or MATH 225.
This course is designed to supplement the mathematical background of the candidate for the single subject credential in mathematics. The mathematical topics will be discussed from the student's and
the teacher's points of view to aid the candidate in making the transition to secondary school mathematics. Topics include mathematical problem-solving, conceptual ideas using algebra, geometry, and
functions, incorporating technology into the mathematics curriculum, and finite systems. 3 hours seminar. (005544)
MATH 342 Math Topics for the Credential 3.0 SP
Prerequisites: MATH 341.
This course focuses on having students examine mathematical pedagogy and the understanding and evaluation of students as mathematical learners while analyzing secondary mathematics curriculum from an advanced standpoint. Students will have opportunities to be involved in the facilitation of mathematical learning. Topics include: history of mathematics education, contemporary mathematics curricula, problem solving, mathematical reasoning and methods of proof, mathematical learning theories, communication, assessment, and collaborative learning communities. 3 hours discussion. (005545)
MATH 346 College Geometry 3.0 SP
Prerequisites: MATH 220 or MATH 225; MATH 330.
An exploration of axioms and models for Euclidean and non-Euclidean geometries focusing on the independence of the Parallel Postulate. Additional topics will be chosen from Euclidean plane geometry,
transformation geometry, and the geometry of polyhedra. 3 hours discussion. (005561)
MATH 449 Modern Algebra 3.0 FA
Prerequisites: MATH 220, MATH 235, MATH 330.
Introduction to basic algebraic structures such as groups, rings, and fields. The fundamental concepts of homomorphism, subgroup, normal subgroup, and factor group of a group, as well as subring, ideal, and factor ring of a ring; permutation groups and matrix groups. 3 hours discussion. (005582)
2 units selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 195 Project MATH Seminar Year 1 1.0 FS
The Project M.A.T.H. Seminar - Year 1 is a biweekly seminar for students in their first year of Project M.A.T.H., an innovative program for students interested in becoming secondary mathematics
teachers. Students work with mentor teachers, prepare and present lessons, and participate in a structured early field experience. Completion of the seminar series satisfies the Credential Program's
Early Field Experience requirement. 1 hour seminar. You may take this course more than once for a maximum of 2.0 units. Credit/no credit grading. (020431)
MATH 241 Secondary Math Early Field Experience 1.0 FS
This seminar and the associated CAVE field experience give prospective teachers early exposure to issues relevant to the profession of teaching secondary mathematics. In particular, the experience
helps these future teachers develop a deeper understanding of the K-12 mathematics curriculum, understand connections between their university subject matter preparation and K-12 academic content,
and reflect on developmental and social factors that affect K-12 students' learning of mathematics. 1 hour seminar. You may take this course more than once for a maximum of 4.0 units. Credit/no
credit grading. (020432)
9 courses required:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
EDTE 302 Access and Equity in Education 3.0 FS
Prospective teachers examine socio-political issues of education relative to current demographics of California schools, integration of multicultural education, and promotion of social justice.
Candidates identify, analyze, and minimize personal and institutional bias and explore the complexities of living and teaching in a pluralistic, multicultural society. Candidates identify barriers
English Learners experience in becoming full participants in the school program and strategies for assisting students in overcoming these barriers. 3 hours lecture. (002977)
EDTE 530 Fundamentals of Teaching Practice for Secondary Teachers 3.0 FS
Teaching is an intellectual challenge that involves planning, facilitating, and reflecting on the process of student learning. Teacher candidates develop strategies necessary to create safe and
structured learning environments and explore relationships among curriculum, instruction, assessment, and classroom climate to meet the needs of a diverse student population within a democratic
society. This is a Single Subject Program course and is not applicable to a master's degree. 3 hours seminar. ABC/no credit grading. (002935)
EDTE 532 Literacy Development 3.0 FS
This course examines issues of language and literacy development for first and second language learners with an emphasis on the adolescent learner. Theory and research on the effects of prior
knowledge, motivation, and culture on reading and writing are addressed. Specific reading, writing, speaking, and listening strategies to support comprehension of academic content by diverse student
populations are emphasized. Assessment techniques specific to literacy development are explored. The central theme of the course is helping students (grades 7-12) become strategic readers and
critical consumers of information in a democratic society. 3 hours seminar. (002902)
EDTE 534 Teaching Special Populations 2.0 FS
This course focuses on legal mandates and practical instructional strategies for general education instructors working with the exceptional student. Content includes the general education teachers'
obligations under IDEA and ADA, the nature and range of exceptional students, models within schools for supporting special populations and selection of appropriate instructional materials and
teaching strategies. The course addresses teachers' attitudes toward inclusion and emphasizes the development of a positive climate of instruction for all special populations in the general
classroom. This is a Single Subject Program course and is not applicable to a master's degree. 2 hours lecture. ABC/no credit grading. (002938)
EDTE 535A Teaching Practicum I for Blended Mathematics Candidates 3.0 FS
This is the first of two teaching practica designed for mathematics teachers. It provides a developmental sequence of carefully planned substantive, supervised field experiences in the 7-12
classroom, including opportunities to observe and apply mathematics-specific pedagogy and democratic practices. This course is a Single Subject Program course and is not applicable to a master's
degree program. 9 hours supervision. Credit/no credit grading. (020985)
EDTE 536 Subject Area Pedagogy II 3.0 FS
This course increases the candidates' awareness and understanding of issues, trends, challenges, and democratic practices of their selected areas of specialization. Teacher candidates advance their
knowledge and skills in teaching academic content standards-based curriculum in the subject area guided by multiple measures of assessing student learning. They make and reflect on instructional
decisions informed by educational theories and research, state-adopted materials and frameworks, and consultations with other professionals. 3 hours lecture. You may take this course more than once
for a maximum of 12.0 units. (002940)
EDTE 537 Applications for Democratic Education 3.0 FS
Prerequisites: Capstone course to be taken in the final semester of the program.
To meet the needs of students in a democratic and diverse society, teachers must be change agents in their school and community. This capstone course advances candidates' knowledge and skills in
developing applications for authentic democratic classroom and school practice. 3 hours lecture. You may take this course more than once for a maximum of 6.0 units. (002941)
EDTE 538 Teaching Practicum II 9.0 FS
Prerequisites: Successful completion of Practicum I (EDTE 535).
This second course in teaching practica continues the sequence of carefully planned substantive, supervised field experiences in the 7-12 grade classroom. Teacher candidate placements are determined
through a collaborative effort of the University and colleagues in cooperating 7-12 grade schools. This is a Single Subject Program course and is not applicable to a master's degree. 27 hours
supervision. Credit/no credit grading. (002942)
EDTE 580 Educational Psychology 3.0 FS
Prerequisites: Conditional admission to a Professional Education Program.
This course is designed to help candidates understand how students' cognitive, personal-social, and physical development, and cultural and linguistic backgrounds are related to effective teaching and
interpersonal relations in secondary schools. Major segments of instruction include the study of how students learn, remember, and make use of the knowledge they have acquired and how students'
educational growth is assessed in schools. Each candidate begins to use this knowledge to organize and manage a learning environment that supports student development, motivation, and learning. 3
hours seminar. (015899)
Additional Requirements
5 courses required:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
CMST 131 Speech Communication Fundamentals 3.0 FS GE
Effective oral communication. Introduction to human communication theory. Practice in gathering, organizing, and presenting material in speeches to persuade, inform, and interest. 1 hour lecture, 2
hours discussion. This is an approved General Education course. (002206)
EDTE 451 Health Education for Secondary School Teachers 3.0 FS
Addresses major health issues affecting the adolescent, including, but not limited to, health promotion and disease prevention, nutrition, substance use and abuse, and sexuality. Fulfills the state
health education requirement for a preliminary teaching credential. 3 hours discussion. (004394)
ENGL 471 Intensive Theory and Practice of Second Language Acquisition 3.0 FS
An intensive introduction to the theory and practice of second language acquisition and teaching. 3 hours lecture. (020485)
POLS 155 American Government: National, State, and Local 3.0 SMF GE
An investigation of Who gets What, When, and How in national, state, and local politics. Also includes principles of American governmental institutions, federal systems, congress, president, and
courts. Fulfills California state graduation and credential requirements for the American Constitution. (Satisfies requirement in California Administrative Code, Title 5, Section 40404.) 3 hours
lecture. This is an approved General Education course. (007475)
HIST 130 United States History 3.0 SMF GE
Survey of American history. Development of the United States and its political, economic, social, and cultural institutions. From colonial times to the present. Satisfies requirement in California
Administrative Code, Title 5, Education, Sec. 40404. 3 hours lecture. This is an approved General Education course. (004500)
3-5 units selected from:
Note: If MATH 441 is chosen, an additional unit of MATH 241 or MATH 295 must be taken.
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 442 Mathematics and the Teaching of Mathematics 3.0 FA
Prerequisites: MATH 342.
Completes a three course series, started with two semesters of Mathematics for the Credential, MATH 341 and MATH 342. Students compare instructional strategies and explore the role content and
pedagogical content knowledge has in these strategies. Central to the class is a lesson study project which entails a cycle of lesson development, implementation, reflection and revision, and
implementation again. Students concurrently enrolled in EDTE 535A, Teaching Practicum I for Blended Math Candidates, are able to implement their lesson as part of the practicum, and have a real
context for the full content of the course. 3 hours lecture. (020978)
Or the following group of courses may be selected:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 295 Project MATH Seminar Year 2 1.0 FS
Prerequisite: MATH 195.
The Project M.A.T.H. Seminar - Year 2 is the continuation of a biweekly seminar for students in Project M.A.T.H., an innovative program for students interested in becoming secondary mathematics
teachers. Students work with mentor teachers, prepare and present lessons, and participate in a structured early field experience. They also take on a leadership role in the seminar. Completion of
the seminar series satisfies the Credential Program's Early Field Experience requirement. 1 hour seminar. You may take this course more than once for a maximum of 2.0 units. Credit/no credit grading.
MATH 441 Math Topics for the Credential 4.0 FS
Prerequisites: MATH 342.
Corequisites: Assignment as a Mathematics Department intern.
Supervised internship in teaching mathematics with accompanying seminar. Guidance in facilitation of mathematical learning. Topics include contemporary mathematics curriculum topics, mathematical
learning theories, communication, and assessment. 3 hours seminar, 3 hours supervision. You may take this course more than once for a maximum of 8.0 units. Credit/no credit grading. (005546)
Or the following group of courses may be selected:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 241 Secondary Math Early Field Experience 1.0 FS
This seminar and the associated CAVE field experience give prospective teachers early exposure to issues relevant to the profession of teaching secondary mathematics. In particular, the experience
helps these future teachers develop a deeper understanding of the K-12 mathematics curriculum, understand connections between their university subject matter preparation and K-12 academic content,
and reflect on developmental and social factors that affect K-12 students' learning of mathematics. 1 hour seminar. You may take this course more than once for a maximum of 4.0 units. Credit/no
credit grading. (020432)
MATH 441 Math Topics for the Credential 4.0 FS
Prerequisites: MATH 342.
Corequisites: Assignment as a Mathematics Department intern.
Supervised internship in teaching mathematics with accompanying seminar. Guidance in facilitation of mathematical learning. Topics include contemporary mathematics curriculum topics, mathematical
learning theories, communication, and assessment. 3 hours seminar, 3 hours supervision. You may take this course more than once for a maximum of 8.0 units. Credit/no credit grading. (005546)
Note: A Major Academic Plan (MAP) is available for this option so students can complete it in four years. Please request a plan from your major advisor or view it at Degree MAPs. It is important to follow this plan carefully as there are several GE substitutions that apply only if the entire program is completed.
The Option in Statistics: 25-26 units
6 courses required:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 260 Elementary Differential Equations 4.0 FS
Prerequisites: MATH 121.
First order separable, linear, and exact equations; second order linear equations, Laplace transforms, series solutions at an ordinary point, systems of first order linear equations, and
applications. 4 hours discussion. (005509)
MATH 350 Introduction to Probability and Statistics 3.0 FA
Prerequisites: MATH 121.
Basic concepts of probability theory, random variables and their distributions, limit theorems, sampling theory, topics in statistical inference, regression, and correlation. 3 hours discussion. (
MATH 351 Introduction to Probability and Statistics 3.0 SP
Prerequisites: MATH 350.
Continuation of MATH 350. 3 hours discussion. (005535)
MATH 450 Mathematical Statistics 3.0 FA
Prerequisites: MATH 220, MATH 330, MATH 351.
A rigorous theoretical treatment of the following topics: transformations of random variables, estimation, Neyman-Pearson hypothesis testing, likelihood ratio tests, and Bayesian statistics. 3 hours
discussion. (005562)
MATH 456 Applied Statistical Methods II 3.0 S2
Prerequisites: MATH 314 or MATH 315.
Advanced topics in applied statistics including multiple and logistic regression, multivariate methods, multi-level modeling, repeated measures, and others as appropriate. The statistical programming
language R is used. Appropriate for biology, agriculture, nutrition, business, psychology, social science and other majors. 3 hours discussion. (005570)
MATH 458 Sampling Methods 3.0 S1
Prerequisite: MATH 314, MATH 315, or MATH 351 (may be taken concurrently).
The theory and application of survey sampling techniques. Topics include simple random sampling, stratified sampling, systematic sampling, and cluster sampling. Appropriate for mathematics, computer
science, psychology, social science, agriculture, biology, and other majors. 3 hours discussion. (005573)
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 314 Probability and Statistics for Science and Technology 4.0 FS
Prerequisite: MATH 121; and one of the following: CINS 110, CSCI 111, MATH 130 (may be taken concurrently), or MATH 230.
Basic concepts of probability and statistics with emphasis on models used in science and technology. Probability models for statistical estimation and hypothesis testing. Confidence limits. One- and
two-sample inference, simple regression, one- and two-way analysis of variance. Credit cannot be received for both MATH 314 and MATH 315. 4 hours discussion. (005533)
MATH 315 Applied Statistical Methods I 3.0 FS
Prerequisite: MATH 105, MATH 109, or MATH 120, or faculty permission.
Single and two sample inference, analysis of variance, multiple regression, analysis of co-variance, experimental design, repeated measures, nonparametric procedures, and categorical data analysis.
Examples are drawn from biology and related disciplines. The statistical programming language R is used. Appropriate for biology, agriculture, nutrition, psychology, social science and other majors.
3 hours discussion. (005568)
3 units selected from:
Any upper-division mathematics (MATH) courses except MATH 310, MATH 311, MATH 341, MATH 342, and MATH 441.
The Option in Foundational Mathematics Education: 22 units
The following program, together with the major core program, fulfills all requirements for the Foundational Subject Matter Preparation Program in Mathematics.
7 courses required:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 305 Conceptual and Practical Statistics 3.0 SP
Prerequisites: MATH 120 or MATH 109 (may be taken concurrently).
Design of statistical experiments, graphing, sampling techniques, probability, and common probability distributions will be discussed, with an emphasis on practical applications. Uses and misuses of
statistics, misrepresentation of data, and proper and improper statistical analyses will be discussed. 3 hours discussion. (005532)
MATH 310 Patterns and Structures in Mathematics 3.0 FS
Prerequisites: MATH 110; MATH 210 or MATH 225.
Builds upon students' understanding of numbers and operations to develop their algebraic and proportional reasoning. Probability is viewed as an application of proportional reasoning. Foundational statistics is also covered. Overall focus on developing a deep understanding of mathematics that is relevant to the teaching of Kindergarten-8th grade. Not acceptable for a mathematics major or minor except the Foundational Math Education option and the Math Education minor. 3 hours discussion. (005542)
MATH 333 History of Mathematics 3.0 SP
Prerequisites: MATH 121; MATH 220 or MATH 225; and at least one upper division mathematics course. MATH 330 is recommended.
Study of the historical development of mathematics, with particular emphasis on the relationship between mathematics and society. 3 hours discussion. (005531)
MATH 341 Mathematical Topics for the Credential 3.0 FA
Prerequisites: MATH 121 or MATH 225.
This course is designed to supplement the mathematical background of the candidate for the single subject credential in mathematics. The mathematical topics will be discussed from the student's and
the teacher's points of view to aid the candidate in making the transition to secondary school mathematics. Topics include mathematical problem-solving, conceptual ideas using algebra, geometry, and
functions, incorporating technology into the mathematics curriculum, and finite systems. 3 hours seminar. (005544)
MATH 342 Math Topics for the Credential 3.0 SP
Prerequisites: MATH 341.
This course focuses on having students examine mathematical pedagogy and the understanding and evaluation of students as mathematical learners while analyzing secondary mathematics curriculum from an advanced standpoint. Students will have opportunities to be involved in the facilitation of mathematical learning. Topics include: history of mathematics education, contemporary mathematics curricula, problem solving, mathematical reasoning and methods of proof, mathematical learning theories, communication, assessment, and collaborative learning communities. 3 hours discussion. (005545)
MATH 346 College Geometry 3.0 SP
Prerequisites: MATH 220 or MATH 225; MATH 330.
An exploration of axioms and models for Euclidean and non-Euclidean geometries focusing on the independence of the Parallel Postulate. Additional topics will be chosen from Euclidean plane geometry,
transformation geometry, and the geometry of polyhedra. 3 hours discussion. (005561)
MATH 442 Mathematics and the Teaching of Mathematics 3.0 FA
Prerequisites: MATH 342.
Completes a three course series, started with two semesters of Mathematics for the Credential, MATH 341 and MATH 342. Students compare instructional strategies and explore the role content and
pedagogical content knowledge has in these strategies. Central to the class is a lesson study project which entails a cycle of lesson development, implementation, reflection and revision, and
implementation again. Students concurrently enrolled in EDTE 535A, Teaching Practicum I for Blended Math Candidates, are able to implement their lesson as part of the practicum, and have a real
context for the full content of the course. 3 hours lecture. (020978)
1 course selected from:
SUBJ NUM Title Sustainable Units Semester Offered Course Flags
MATH 195 Project MATH Seminar Year 1 1.0 FS
The Project M.A.T.H. Seminar - Year 1 is a biweekly seminar for students in their first year of Project M.A.T.H., an innovative program for students interested in becoming secondary mathematics
teachers. Students work with mentor teachers, prepare and present lessons, and participate in a structured early field experience. Completion of the seminar series satisfies the Credential Program's
Early Field Experience requirement. 1 hour seminar. You may take this course more than once for a maximum of 2.0 units. Credit/no credit grading. (020431)
MATH 241 Secondary Math Early Field Experience 1.0 FS
This seminar and the associated CAVE field experience give prospective teachers early exposure to issues relevant to the profession of teaching secondary mathematics. In particular, the experience
helps these future teachers develop a deeper understanding of the K-12 mathematics curriculum, understand connections between their university subject matter preparation and K-12 academic content,
and reflect on developmental and social factors that affect K-12 students' learning of mathematics. 1 hour seminar. You may take this course more than once for a maximum of 4.0 units. Credit/no
credit grading. (020432)
Electives Requirement:
To complete the total units required for the bachelor's degree, select additional elective courses from the total University offerings. You should consult with an advisor regarding the selection of
courses which will provide breadth to your University experience and possibly apply to a supportive second major or minor.
Advising Requirement:
Advising is mandatory for all majors in this degree program. Consult your undergraduate advisor for specific information.
A student may complete more than one option in the major. Only courses specifically required by both options may be double counted.
Honors in the Major:
Honors in the Major is a program of independent work in your major. It requires 6 units of honors course work completed over two semesters.
The Honors in the Major program allows you to work closely with a faculty mentor in your area of interest on an original performance or research project. This year-long collaboration allows you to
work in your field at a professional level and culminates in a public presentation of your work. Students sometimes take their projects beyond the University for submission in professional journals,
presentation at conferences, or academic competition. Such experience is valuable for graduate school and professional life. Your honors work will be recognized at your graduation, on your permanent
transcripts, and on your diploma. It is often accompanied by letters of commendation from your mentor in the department or the department chair.
Some common features of Honors in the Major program are:
• You must take 6 units of Honors in the Major course work. All 6 units are honors classes (marked by a suffix of H), and at least 3 of these units are independent study (399H, 499H, 599H) as
specified by your department. You must complete each class with a minimum grade of B.
• You must have completed 9 units of upper-division course work or 21 overall units in your major before you can be admitted to Honors in the Major. Check the requirements for your major carefully,
as there may be specific courses that must be included in these units.
• Your cumulative GPA should be at least 3.5 or within the top 5% of majors in your department.
• Your GPA in your major should be at least 3.5 or within the top 5% of majors in your department.
• Most students apply for or are invited to participate in Honors in the Major during the second semester of their junior year. Then they complete the 6 units of course work over the two semesters
of their senior year.
• Your honors work culminates with a public presentation of your honors project.
While Honors in the Major is part of the Honors Program, each department administers its own program. Please contact your major department or major advisor to apply.
Honors in Mathematics
Well-qualified Mathematics majors are encouraged to apply for Honors in Mathematics. The program is open to junior and senior Mathematics majors who have completed 9 upper-division units (or a total
of 24 units) in mathematics, including MATH 420W with a grade of B or better, and have a grade point average among the top 5% of junior-senior mathematics majors. Please visit the department office
in HOLT 181 for further information.
Douglas College Physics 1104 Custom Textbook – Winter and Summer 2020
Chapter 2 One-Dimensional Kinematics
• Define position, displacement, distance, and distance traveled.
• Explain the relationship between position and displacement.
• Distinguish between displacement and distance traveled.
• Calculate displacement and distance given initial position, final position and the path between the two.
Figure 1. These cyclists in Vietnam can be described by their position relative to buildings and a canal. Their motion can be described by their change in position, or displacement, in the frame of
reference. (credit: Suzan Black, Fotopedia).
In order to describe the motion of an object, you must first be able to describe its position—where it is at any particular time. More precisely, you need to specify its position relative to a
convenient reference frame. Earth is often used as a reference frame, and we often describe the position of an object as it relates to stationary objects in that reference frame. For example, a
rocket launch would be described in terms of the position of the rocket with respect to the Earth as a whole, while a professor’s position could be described in terms of where she is in relation to
the nearby white board. In other cases, we use reference frames that are not stationary but are in motion relative to the Earth. To describe the position of a person in an airplane, for example, we
use the airplane, not the Earth, as the reference frame. Please see the figures below.
If an object moves relative to a reference frame (for example, if a professor moves to the right relative to a white board or a passenger moves toward the rear of an airplane), then the object’s
position changes. This change in position is known as displacement. The word “displacement” implies that an object has moved, or has been displaced.
Displacement is the change in position of an object:
Δx = x[f] – x[0]
where Δx is displacement, x[f] is the final position, and x[0] is the initial position.
In this text, the upper-case Greek letter Δ (delta) always means "change in" whatever quantity follows it; thus, Δx means change in position. Always solve for displacement by subtracting the initial position x[0] from the final position x[f].
Note that the SI unit for displacement is the meter (m) (see Chapter 1, Physical Quantities and Units), but sometimes kilometers, miles, feet, and other units of length are used. Keep in mind that when units other than the meter are used in a problem, you may need to convert them into meters to complete the calculation.
Figure 2. A professor paces left and right while lecturing. Her position relative to Earth is given by x. The +2.0 m displacement of the professor relative to Earth is represented by an arrow
pointing to the right.
Figure 3. A passenger moves from his seat to the back of the plane. His location relative to the airplane is given by x. The -4.0-m displacement of the passenger relative to the plane is represented
by an arrow toward the rear of the plane. Notice that the arrow representing his displacement is twice as long as the arrow representing the displacement of the professor shown above (he moves twice
as far).
Note that displacement has a direction as well as a magnitude. The professor’s displacement is 2.0 m to the right, and the airline passenger’s displacement is 4.0 m toward the rear. In
one-dimensional motion, direction can be specified with a plus or minus sign. When you begin a problem, you should select which direction is positive (usually that will be to the right or up, but you
are free to select positive as being any direction). The professor’s initial position is x[0] = 1.5 m and her final position is x[f] = 3.5 m. Thus her displacement is
Δx = x[f] – x[0] = 3.5 m – 1.5 m = +2.0 m
In this coordinate system, motion to the right is positive, whereas motion to the left is negative. Similarly, the airplane passenger’s initial position is x[0] = 6.0 m and his final position is x[f]
= 2.0 m, so his displacement is
Δx = x[f] – x[0] = 2.0 m – 6.0 m = –4.0 m
His displacement is negative because his motion is toward the rear of the plane, or in the negative x direction in our coordinate system.
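The sign convention in these two examples can be sketched in a few lines of Python. The function name is chosen for illustration; the positions are the ones from the professor and passenger examples above.

```python
# Displacement in one-dimensional motion: final position minus
# initial position. The sign carries the direction (here, positive
# is to the right / toward the front of the plane).
def displacement(x0, xf):
    """Return the change in position, xf - x0, in meters."""
    return xf - x0

# Professor: starts at 1.5 m, ends at 3.5 m -> +2.0 m (to the right)
print(displacement(1.5, 3.5))   # 2.0

# Passenger: starts at 6.0 m, ends at 2.0 m -> -4.0 m (toward the rear)
print(displacement(6.0, 2.0))   # -4.0
```

Because the subtraction is always "final minus initial," the direction falls out automatically: no separate bookkeeping of left versus right is needed once a positive direction has been chosen.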
Although displacement is described in terms of direction, distance is not. Distance is defined to be the magnitude or size of displacement between two positions. Note that the distance between two
positions is not the same as the distance traveled between them. Distance traveled is the total length of the path traveled between two positions. Distance has no direction and, thus, no sign. For
example, the distance the professor walks is 2.0 m. The distance the airplane passenger walks is 4.0 m.
It is important to note that the distance traveled, however, can be greater than the magnitude of the displacement (by magnitude, we mean just the size of the displacement without regard to its
direction; that is, just a number with a unit). For example, the professor could pace back and forth many times, perhaps walking a distance of 150 m during a lecture, yet still end up only 2.0 m to
the right of her starting point. In this case her displacement would be +2.0 m, the magnitude of her displacement would be 2.0 m, but the distance she traveled would be 150 m. In kinematics we nearly
always deal with displacement and magnitude of displacement, and almost never with distance traveled. One way to think about this is to assume you marked the start of the motion and the end of the
motion. The displacement is simply the difference in the position of the two marks and is independent of the path taken in traveling between the two marks. The distance traveled, however, is the
total length of the path taken between the two marks.
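The difference between the two marks can be made concrete in a few lines of code. This is an illustrative sketch; the function names are mine, not part of the text.

```python
def displacement(positions):
    """Displacement: final position minus initial position (the sign gives direction)."""
    return positions[-1] - positions[0]

def distance_traveled(positions):
    """Distance traveled: total path length, summing each leg without regard to sign."""
    return sum(abs(b - a) for a, b in zip(positions, positions[1:]))

# The professor's pacing example: start at 1.5 m, end at 3.5 m,
# after walking back and forth in between.
path = [1.5, 3.5, 1.5, 3.5]
print(displacement(path))       # → 2.0 (metres, to the right)
print(distance_traveled(path))  # → 6.0 (metres of walking for only 2.0 m of displacement)
```

The displacement depends only on the first and last entries of `path`; the distance traveled depends on every leg in between.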
Check Your Understanding 1
1: A cyclist rides 3 km west and then turns around and rides 2 km east. (a) What is her displacement? (b) What distance does she ride? (c) What is the magnitude of her displacement?
Section Summary
• Kinematics is the study of motion without considering its causes. In this chapter, it is limited to motion along a straight line, called one-dimensional motion.
• Displacement is the change in position of an object.
• In symbols, displacement Δx is defined to be
Δx = x[f] − x[0]
where x[0] is the initial position and x[f] is the final position. In this text, the Greek letter Δ (delta) always means “change in” whatever quantity follows it. The SI unit for displacement is the meter (m). Displacement has a direction as well as a magnitude.
• When you start a problem, assign which direction will be positive.
• Distance is the magnitude of displacement between two positions.
• Distance traveled is the total length of the path traveled between two positions.
Conceptual Questions
1: Give an example in which there are clear distinctions among distance traveled, displacement, and magnitude of displacement. Specifically identify each quantity in your example.
2: Under what circumstances does distance traveled equal magnitude of displacement? What is the only case in which magnitude of displacement and displacement are exactly the same?
3: Bacteria move back and forth by using their flagella (structures that look like little tails). Speeds of up to 50 μm/s (50 × 10^-6 m/s) have been observed. The total distance traveled by a
bacterium is large for its size, while its displacement is small. Why is this?
Problems & Exercises
2: Find the following for path B: (a) The distance traveled. (b) The magnitude of the displacement from start to finish. (c) The displacement from start to finish.
3: Find the following for path C: (a) The distance traveled. (b) The magnitude of the displacement from start to finish. (c) The displacement from start to finish.
4: Find the following for path D: (a) The distance traveled. (b) The magnitude of the displacement from start to finish. (c) The displacement from start to finish.
kinematics: the study of motion without considering its causes
position: the location of an object at a particular time
displacement: the change in position of an object
distance: the magnitude of displacement between two positions
distance traveled: the total length of the path traveled between two positions
Check Your Understanding 1
Figure 5.
1: (a) The rider’s displacement is Δx = x[f] − x[0] = −1 km. The displacement is negative because we take east to be positive and west to be negative; she ends up 1 km west of her starting point. Or you could just say “1 km to the west”. Note that the drawing clearly showed that west was chosen to be negative. (b) The distance traveled is 3 km + 2 km = 5 km. (c) The magnitude of the displacement is 1 km.
Problems & Exercises
1: (a) 7 m (b) 7 m (c) + 7 m
2: (a) 5 m (b) 5 m (c) – 5 m
3: This is badly drawn so the answers are debatable. Assuming it went from a position of 2 m to 10 m, then back to 8 m, and then back again to 10 m, that gives (a) a distance traveled of 12 m, (b) a magnitude of displacement of 8 m, and (c) a displacement of +8 m, or 8 metres to the right.
4: (a) 8 m (b) 4 m (c) – 4 m
An ideal gas has a volume of 2.28 L at 279 K and 1.07 atm. What is the pressure when the volume is 1.03 L and the temperature is 307 K? | Socratic
An ideal gas has a volume of 2.28 L at 279 K and 1.07 atm. What is the pressure when the volume is 1.03 L and the temperature is 307 K?
1 Answer
$p = 2.61\ \text{atm}$ to 3 significant figures
First, we use the first set of data to calculate the number of moles using the ideal gas equation:
$pV = nRT$
• $p$ is pressure in pascals ($\text{Pa}$)
• $V$ is volume in cubic metres ($\text{m}^3$)
• $n$ is the number of moles
• $R$ is the gas constant = $8.314\ \text{J mol}^{-1}\,\text{K}^{-1}$
• $T$ is the temperature in Kelvin ($\text{K}$)
First, convert your given values into workable units:
• $1\ \text{L} = 0.001\ \text{m}^3$, therefore $2.28\ \text{L} = 0.00228\ \text{m}^3$
• $1\ \text{atm} = 101325\ \text{Pa}$, therefore $1.07\ \text{atm} = 108417.8\ \text{Pa}$
Second, rearrange the equation to solve for moles:
$n = \dfrac{pV}{RT}$
Next, substitute in your given values and calculate the number of moles:
$n = \dfrac{108417.8\ \text{Pa} \times 0.00228\ \text{m}^3}{8.314 \times 279\ \text{K}} = 0.1065666255\ \text{moles}$
We can then move onto calculating the new pressure value. The first thing to do here is to, again, convert non-compliant units into ones that are accepted by the equation:
• $1\ \text{L} = 0.001\ \text{m}^3$, therefore $1.03\ \text{L} = 0.00103\ \text{m}^3$
Then we rearrange the equation to solve for pressure:
$p = \dfrac{nRT}{V}$
And substituting in our values, we get:
$p = \dfrac{0.1065666255 \times 8.314 \times 307\ \text{K}}{0.00103\ \text{m}^3} = 264078.0989\ \text{Pa} = 2.61\ \text{atm}$ to 3 significant figures
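The two-step calculation above can be reproduced in a short script. This is an illustrative sketch of the worked answer, not code from the original page.

```python
# Ideal gas worked example: n = pV/(RT) from state 1, then p = nRT/V at state 2.
R = 8.314  # gas constant, J/(mol*K)

# State 1: 2.28 L at 279 K and 1.07 atm, converted to SI units
p1 = 1.07 * 101325   # Pa
V1 = 2.28 * 0.001    # m^3
T1 = 279             # K
n = p1 * V1 / (R * T1)  # number of moles

# State 2: 1.03 L at 307 K
V2 = 1.03 * 0.001    # m^3
T2 = 307             # K
p2 = n * R * T2 / V2    # Pa

print(round(p2 / 101325, 2))  # → 2.61 atm
```

Because $n$ cancels, the same answer follows directly from the combined gas law $p_1V_1/T_1 = p_2V_2/T_2$ without computing the moles at all.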
What is the gravitational potential energy of a 9 kg object on a shelf 7/4 m high? | Socratic
What is the gravitational potential energy of a 9 kg object on a shelf 7/4 m high?
1 Answer
Gravitational Potential Energy has a very simple equation:
$P E = m g h$
$P E$ is the Potential Energy, $m$ is mass, $g$ is the acceleration due to gravity, and $h$ is the height above ground.
We know mass, height, and gravitational acceleration, so we can go ahead and calculate the Potential Energy:
$m = 9\ \text{kg}$
$g = 9.8\ \text{m/s}^2$
$h = \frac{7}{4}\ \text{m}$
$P E = 9 \times 9.8 \times \frac{7}{4}$
$P E = \frac{63}{4} \times 9.8$
$P E = \frac{617.4}{4} = 154.35\ \text{J}$
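The same arithmetic in code, as an illustrative sketch (not part of the original answer):

```python
# Gravitational potential energy: PE = m * g * h
m = 9        # kg
g = 9.8      # m/s^2, acceleration due to gravity
h = 7 / 4    # m, height of the shelf
PE = m * g * h
print(PE)  # ≈ 154.35 J
```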
Expected Goals For All
It seems that everybody has their own expected goals model for football nowadays, but they all seem to be top secret and all appear to give different results, so I thought I'd post a quick example of one technique here to try and stimulate a bit of chat about the best way to model them.
The Data
Over the past few weeks I have tediously collected several thousand xy co-ordinates for shot locations from Squawka and converted them into approximate distances from goal in metres, assuming that an
average football pitch is 100m x 65m.
Goals Versus Distance
Figure 1 below shows the relationship between the probability of scoring a goal and how far away from the goal line the shot is taken from.
Figure 1: Shots Versus Distance From Goal
There seems to be a little bit of noise in the data, particularly around the 12-13m mark but overall I was pleasantly surprised how neat the data looks – there seems to be a pretty clear non-linear
relationship between the likelihood of scoring and how far away from the goal the shot is taken from.
So how do we model this relationship? Obviously we cannot just stick a linear regression through it, as the relationship is clearly not linear, so one possibility is to use a polynomial instead of a straight line (Figure 2).
Figure 2: Fitting a Polynomial
Unfortunately, this does not give particularly good results as low order polynomials (the orange line) do not fit tightly enough to the non-linearity in the relationship while higher-order
polynomials (the red line) start to fit to the noise in the data leading to problems with over-fitting.
So what do we do now? Well, looking closer the shape of the curve appears exponential so one option is to fit a Power function to it. We can do this pretty easily by taking the log of the data,
fitting a linear regression against it and plotting this against our non-logged data (Figure 3).
Figure 3: Power Curve
This gives an extremely good fit with the data and seems a plausible choice. We know goal scoring is Poisson distributed so it would seem natural to fit expected goals using an exponential shaped
curve since Poisson and exponential distributions are inherently linked – the exponential distribution in fact describes the time taken between individual events occurring in a Poisson process.
If we calculate the r squared value for the fit of the Power curve then we get a value of 0.84, meaning 84% of the variance in goal scoring can be attributed to how far away the player taking the
shot is from the goal. This is pretty impressive as it leaves just 16% attributed to other reasons, such as the angle of the shot, goalkeeper positioning, defensive pressure, the shooting player’s
talent etc.
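The log-transform-then-regress procedure described above can be sketched in pure Python. This is not the author's actual code; the helper name and the synthetic check data are illustrative.

```python
import math

def fit_power_law(distances, probabilities):
    """Fit P = 10^b * d^a by linear least squares on log10-transformed data.

    Returns (a, b): the exponent and the log10 intercept.
    """
    xs = [math.log10(d) for d in distances]
    ys = [math.log10(p) for p in probabilities]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Ordinary least squares slope and intercept in log-log space
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Synthetic check: data generated from a known power law is recovered exactly.
dists = [2, 4, 8, 16, 32]
probs = [10 ** 0.0595 * d ** -1.0369 for d in dists]
a, b = fit_power_law(dists, probs)
print(round(a, 4), round(b, 4))  # → -1.0369 0.0595
```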
Before you ask, I’ll be looking at whether adding these additional factors into the model can improve it or whether the added complexity is not worth chasing the 16% for in the coming weeks.
Using the Expected Goals Model
But how do we use the model? Although everybody else’s models seem to be top secret, I’m going to give mine away. The coefficient for the regression is $-1.036884$ and the intercept is $0.05950286$.
To put this into action all you need to do is raise the distance away from the goal in metres to the power of the coefficient and multiply by 10 to the power of the intercept. For example, a shot
from 8 metres gives:
$8^{-1.036884} \times 10^{0.05950286} = 0.132771$ expected goals
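Wrapped in a small helper (the function name is mine, not the author's), the calculation above looks like this:

```python
def expected_goals(distance_m, coef=-1.036884, intercept=0.05950286):
    """Expected goals for a single shot, per the power-curve model in the post:
    raise the distance (in metres) to the coefficient and multiply by
    10 raised to the intercept."""
    return distance_m ** coef * 10 ** intercept

# A shot from 8 metres
print(expected_goals(8))  # ≈ 0.1328 expected goals
```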
So how about we give it a proper test and try it out on this season’s English Premier League to date? The results are shown in Table 1 and overall give a root mean square error of 8.2 goals, which
seems a pretty reasonable starting point for developing the model further from.
Team Goals expG Residual
1 Man City 68.00 46.90 21.10
2 Liverpool 63.00 42.56 20.44
3 Arsenal 48.00 35.26 12.74
4 Chelsea 48.00 42.56 5.44
5 Man Utd 41.00 35.26 5.74
6 Southampton 37.00 29.05 7.95
7 Everton 37.00 33.31 3.69
8 Newcastle 32.00 30.32 1.68
9 Swansea 32.00 26.89 5.11
10 Tottenham 32.00 31.21 0.79
11 WBA 30.00 31.17 -1.17
12 West Ham 28.00 26.69 1.31
13 Aston Villa 27.00 24.20 2.80
14 Stoke 26.00 25.29 0.71
15 Sunderland 25.00 25.63 -0.63
16 Hull 25.00 23.95 1.05
17 Fulham 24.00 24.39 -0.39
18 Cardiff 19.00 24.67 -5.67
19 Norwich 19.00 27.61 -8.61
20 Crystal Palace 18.00 25.03 -7.03
Table 1: Expected Goals For The English Premier League To Date
You can also see a pretty clear pattern in that the teams at the top of the league have generally over-performed the goal expectancy while those towards the bottom end have under-performed it. This
would seem reasonable as we are predicting average goal expectancy and the top teams are obviously above average so should perhaps do better with their chances, while the lower teams are below
average so would be expected to perform worse?
What Next?
I’m not claiming this to be the only way of calculating expected goals, or even the best way but hopefully it will encourage more discussion of how to calculate expected goals rather than a lot of
secret black boxes all giving different results.
I hope to write more about expected goals over the coming weeks in order to test this equation to see how well it really works, to hopefully improve it further and to try and understand what the
metric can and cannot tell us.
In the meantime, feel free to use my equation to calculate expected goals, all I ask is that you don’t try and pass the equation off as your own (you know who you are!!) and that if you use it then
please acknowledge me and link back to my site.
Be warned though it’s a work in progress so is subject to change as and when I improve things…
Christopher Hoeger - February 13, 2014
Hey, I’m a chemical engineer, so while my knowledge of statistics is limited to its application in my field, I would really like to get into football analytics, and I’d love to contribute, if
possible, to your expected goals model. Could you tell me where the best publicly available data is, and, more importantly, the best way to access it? Thank you, Chris
Martin Eastwood - February 13, 2014
Hi Chris,
I took all the data for constructing the model from Squawka. It’s a tedious process but with a bit of patience you can transcribe approximate xy coordinates for shots from their site.
Ali - March 19, 2014
Would love to know the process behind transcribing the shots into x,y coordinates if you have the time..
Claus Moeller - February 16, 2014
Interesting model.
I think the average of the pitches in Premier League, is way bigger than 100m x 65m.
Most of the pitches in Premier League is 105m x 68m – http://www.openplay.co.uk/blog/premiership-football-pitch-sizes-2013-2014/
Just for my curiosity, do you rely on all the data from Squawka?
Martin Eastwood - February 16, 2014
Thanks for the link, not seen that one before. Should be fairly trivial to rescale should people wish to. Yes I used Squawka to get all the xy coordinates.
Hugo Varandas - February 19, 2014
Hi, nice work. I find this conclusion very interesting “This is pretty impressive as it leaves just 16% attributed to other reasons”. Maybe this is the reason why the better teams outperform the
I just do not understand how does this help to predict the expected goals in a particular future game.
Martin Eastwood - February 19, 2014
Yes, presumably somehere in that 16% is player talent, defensive pressure, goalkeeper skill etc. So far my equation is more explanatory rather than predictive. More work would be needed to produce a
predictive model from shots.
Justin - February 20, 2014
I’m sure the spike in probability at the 12m mark is the result of penalties. Analyzing only goals from open play would increase the R^2 I would suspect.
Martin Eastwood - February 20, 2014
Yes, when I next get some free time and look at removing them although the fit of the curve is so good is will probably have minimal effect.
Antony - March 15, 2014
Do you know whether the decay curve exponent is replicable across seasons?
Also do you know how much difference angle of the shot makes and if other “black box”-guys additionally use this? The fit by averaging across angles is good, but wide players may have a detriment
when cast against a benchmark without it – or maybe the difference is small.
Well done for compiling the data from squawka – when I had a quick search the only way to get x-y’s seemed to be by eye and a ruler from their pitch plots!!
Martin Eastwood - March 17, 2014
I’ve not looked season-by-season yet but the decay curve is created from multiple season’s worth of data aggregated together.
The angle presumably makes some difference but considering how high the r-squared is it must only account for a relatively small proportion of expected goals although I will look into this in more
detail as soon as I get some free time.
Benjamin Lindqvist - April 18, 2014
Hey, just thought I’d cryptically point out there’s a better function to fit to the data than that.
Martin Eastwood - April 18, 2014
Ooh you can’t just leave it that, tell me more :-)
Benjamin Lindqvist - April 18, 2014
Well if I’m not mistaken, the probability of scoring from 0 yards is, according to your model, more than 100%. Not even Ronaldo will score more than 100% of the time, not even from the goal line :D
Why don’t you try a decaying exponential instead? That has all the properties you’re looking for, but it will also be well behaved at x=0.
Martin Eastwood - April 18, 2014
Yes the 0 yards issue is a concern. I thought about forcing the curve through 1 but dislike taking that sort of brute force approach. The decaying exponential sounds a really good idea. I’ll look
into that, thanks!
Benjamin Lindqvist - April 18, 2014
I.e.: http://fooplot.com/#W3sidHlwZSI6MCwiZXEiOiIxL3giLCJjb2xvciI6IiMwNEZGMDAifSx7InR5cGUiOjAsImVxIjoiZV4oLXgpIiwiY29sb3IiOiIjRkYwMDAwIn0seyJ0eXBlIjoxMDAwfV0-
Anonymous - April 22, 2014
Hey Martin,
I had a very noobish doubt. Please don’t mind it. Can you explain how you calculated the probability of scoring (Y axis) to arrive at the scatter plot in Figure 1?
Thanks in advance. :)
Anonymous - April 22, 2014
Poisson distribution of course. My bad!
Anonymous - April 22, 2014
If you could run me through the process or maybe send me some useful links, it would be greatly appreciated!
Martin Eastwood - April 22, 2014
I split the shots into different bins by distance and then calculated the probability per bin. I then used this set of aggregated probabilities to construct the model and fit the curve.
Anonymous - April 22, 2014
Thanks a lot! :)
I need to brush up on my stats concepts.
abhinav - July 26, 2014
how are you gathering xy coordinates from squawka? or are you using some other metric to approximate?
Martin Eastwood - July 26, 2014
It took quite a bit of effort!
Design of Drainage Downspouts Systems over a Road Embankment
Research Institute of Water and Environmental Engineering, Universitat Politècnica de València, 46022 Valencia, Spain
Flumen Institute, Universitat Politècnica de Catalunya—CIMNE, 08034 Barcelona, Spain
Author to whom correspondence should be addressed.
Submission received: 7 September 2023 / Revised: 3 October 2023 / Accepted: 7 October 2023 / Published: 10 October 2023
Numerous studies have examined the complex relationship between factors like embankment downspout spacing, height, slope, and rainfall characteristics in the quest to find the best spacing for
embankment downspouts. Defining the correct spacing between road drainage elements is of utmost importance in minimizing water flow on roadways. This paper presents a methodology based on numerical
methods for the design of road drainage systems using the Iber model. The objective of the work is to propose a tool and criteria for analyzing the hydraulic behavior of runoff on highways, determine
the appropriate drainage behavior, and apply the methodology in a case study. This case study is based on a straight highway section with slopes up to 5%, according to Spanish road design
regulations. Different dimensions are considered for the chute, drainage channel, collection nozzle, and downspout over the embankment. Tests are carried out to evaluate the separation between
downspouts, the longitudinal slope, and the size of the nozzles. The results show the suitable hydraulic performance of the model, besides providing the absorption capacity of each downspout. The
influence of the nozzle size, the slope, and the width of the causeway on the draughts and velocities is analyzed. The influence of downspout spacing and nozzle type on road drainage design is
determined. In summary, this article presents a methodology and criteria for the design of road drainage systems and shows the results obtained in a case study using the Iber model. The results help
in understanding the influence of different variables on the hydraulic behavior of road runoff and provide relevant information for proper drainage design.
1. Introduction
The impact of adverse weather conditions on traffic demand, traffic safety, and traffic flow is a well-documented phenomenon [
]. Generally, precipitation events, such as rainfall, exert a notable influence on travel dynamics. On average, they result in a reduction in travel speeds ranging from 1.2% to 18.4% and can lead to
a decrease in traffic volume by approximately 1.1% to 16.5% [
]. Consequently, the presence of water on the road surface emerges as a pivotal factor in ensuring traffic safety. It not only affects drivers’ visibility but also predisposes the occurrence of
hydroplaning [
]. The likelihood of hydroplaning is contingent upon several factors, including water depth, roadway geometry, vehicle speed, tread depth, tire inflation pressure, and the overall condition of the
pavement surface [
Various countries have established guidelines to ensure the efficient removal of runoff from road surfaces, aiming to prevent skidding, pooling, and related hazards. Notably, Spanish regulations
govern road surface drainage [
] and delineate roadway protection within a specific cross-section. This protection is defined as the vertical difference in elevation between the lowest point of the roadway and the water level
corresponding to the design flow rate. In accordance with these regulations, the drainage system for both the roadbed and shoulders must facilitate the collection, conveyance, and evacuation of
runoff while adhering to the prescribed cross-sectional profile; that is, the roadway protection must be greater than or equal to 0.05 m. Although the project may justify the adoption of a lower value, the water level must not reach the hard shoulder.
Within the context of Australian road design guidelines [
], specific criteria are prescribed for geometric road design, with a particular emphasis on drainage considerations. These criteria stipulate that road surface geometry should be configured to limit
the drainage path to a maximum length of 60 m. For road sections where the operational or design speed exceeds 80 km/h, it is recommended to maintain a maximum water depth of 2.5 mm as desirable,
with an absolute limit of 4.0 mm. In all other scenarios, both the desirable and absolute maximum allowable water depth are set at 5.0 mm. These guidelines play a crucial role in ensuring safe and
efficient road design in Australia, addressing concerns related to water accumulation and road geometry.
In the United States, as per guidelines provided by the Federal Highway Administration [
], hydroplaning is acknowledged to potentially occur at speeds as low as 89 km/h when water depth reaches a mere 2 mm. However, the occurrence of hydroplaning is subject to a range of variables,
which can lead to this phenomenon happening at even lower velocities and shallower water depths [
]. Consequently, in critical road sections, the risk of hydroplaning can be effectively mitigated through prudent highway geometry design, which involves reducing the drainage path length for water
flowing over the pavement, thereby preventing the accumulation of water. Implementing such measures involves strategies like enhancing pavement surface texture depth, incorporating open-graded
asphaltic pavements to channel water away from the tire contact area, or deploying drainage structures along the roadway to capture and expeditiously evacuate water flowing over the pavement [
In the pursuit of determining the optimal spacing for embankment downspouts, numerous studies have delved into the intricate interplay between factors such as embankment downspout spacing, embankment
height, slope, and rainfall characteristics (e.g., [
]). Typically, these investigations leverage hydraulic and hydrologic models to simulate runoff patterns over embankments, facilitating the estimation of downspout spacing required to avert spillage
and consequent erosion within the embankment. The findings of these studies offer valuable insights that can inform the design of road surface drainage systems and the development of overarching
guidelines for embankment downspout spacing. Nonetheless, it is imperative to bear in mind that the optimal spacing remains contingent upon a multitude of variables, including localized climatic
conditions and the construction material employed, necessitating case-specific adjustments for optimal results.
The precise definition of inlet spacing between road drainage elements holds paramount significance in the effort to minimize water flow on roadways. Diverse methodologies have been proposed by
various authors in this context. Wong [
] introduced an approach founded on kinematic wave theory for determining road drainage inlet spacing under continuous grade conditions. This method correlates with the permissible maximum flood
width, the physical attributes of the roadway, the empirical relationship between maximum discharge and intercepted flow, as well as the rainfall intensity–duration curve. Meanwhile, Nickow and
Hellman [
] employed a genetic algorithm to present a decision-making framework for cost-effective stormwater inlet design within highway drainage systems. In this context, optimal design is defined as the
most economical combination of inlet types, sizes, and locations that effectively drain a given length of pavement.
Ku et al. [
] presented a computational model designed for the optimal planning of road surface drainage facilities rooted in the varied flow theory. This model is specifically tailored to calculate flow
profiles within linear drainage channels. It estimates the inflow from the road surface into the channel by considering rainfall intensity and road width as key variables. Notably, the inlet spacing
determined through varied flow analysis tends to be greater than that derived from uniform flow analysis. However, this spacing diminishes with increasing slope. Consequently, a larger outlet spacing
corresponds to a reduced number of outlets positioned along a road curb, illustrating the intricate relationship between design parameters and flow dynamics.
Kwak et al. [
] devised an optimal methodology for conducting a comprehensive analysis of road surface runoff, accounting for diverse road conditions to ensure a precise evaluation of the road’s drainage capacity.
Factors taken into consideration encompass road width (set at 6 m) and slopes (ranging from a longitudinal road slope of 2% to 10%, a transverse road slope of 2%, and a transverse gutter slope of 2%
to 7%). The analytical framework utilized essential parameters from the spatially varied flow module, with attention to basin geometry (simplified or modified basin), travel time (in the context of
road surface flow or gutter flow), and the interception efficiency of the grate inlet. This approach enables a more comprehensive and accurate assessment of the road’s ability to manage surface
runoff, accommodating various real-world road scenarios.
Han et al. [
] proposed a prediction model for water film depth (WFD) that hinges on the geometric attributes of road infrastructure and the effectiveness of drainage systems under varying rainfall intensities.
Specifically, pavement WFD on rainy days is defined as the depth of water accumulation during short-duration (1 h) rainfall events. This research centers its attention on WFD at locations like
water-retaining belts and curb stones. The interplay of different road configurations and combinations yields varied water depths, even when water accumulation remains constant. As such, constructing
a WFD-based model entails a multi-faceted approach, encompassing rainfall calculations, the development of a water accumulation model, the determination of drainage facility displacement, and the
derivation of WFD formulas for road surfaces through the application of the Manning equation, tailored to the distinct geometric characteristics of the road.
Li et al. [
] introduced an updated two-dimensional flow simulation program, FullSWOF [
], which constitutes a significant advancement in hydraulic modeling. This model comprehensively solves shallow water equations governing overland flow while incorporating submodules for modeling
infiltration by zones and the interception of flow by grate inlets. In the context of this study, a comprehensive dataset comprising 1000 road-curb inlet modeling cases was established. These cases
spanned a wide array of combinations involving 10 longitudinal slopes ranging from 0.1% to 1%, 10 cross slopes ranging from 1.5% to 6%, and 10 upstream inflow rates ranging from 6 L/s to 24 L/s, all
aimed at determining inlet length. A second set of 1000 modeling cases, sharing the same longitudinal and cross slopes, explored 10 different curb inlet lengths, varying from 0.15 m to 1.5 m, with
the goal of assessing inlet efficiency. Consequently, regression equations for inlet length and inlet efficiency were meticulously developed as functions of the input parameters, offering valuable
insights into the design and optimization of road-curb inlets.
Aranda et al. [
] presented a novel approach grounded in hydraulic numerical simulation, harnessing the Iber model [
], to assess the efficacy of grate inlets. This method is well aligned with the design standards upheld in various countries. The methodology delineated in this study streamlines the process of
conducting sensitivity analyses for diverse scupper configurations. It grants comprehensive control over the hydraulic performance of each individual grate inlet within a range of scenarios. The
availability of detailed hydraulic data serves as the cornerstone for comparative evaluations of different solutions, thereby facilitating informed decision-making processes and ultimately
culminating in the realization of efficiency-centric solutions.
This article introduces a methodology underpinned by numerical methods for the design of road drainage systems, achieved through the solution of 2D shallow water equations utilizing the Iber model.
The primary objectives of this paper encompass the development of a tool and a set of criteria for the systematic analysis of the hydraulic dynamics of runoff on road surfaces, the formulation of a
comprehensive methodology to ascertain the effectiveness and efficiency of road drainage systems, and the practical demonstration of the proposed methodology through an illustrative case study,
offering a real-world application and validation of the approach.
2. Methodology
2.1. Design Hyetograph
Design hyetographs are used in conjunction with unit hydrographs to obtain peak discharge and hydrograph shape for hydraulic design [
]. Their definition is critical in drainage design since it determines the peak flooding volume in a catchment and the corresponding drainage capacity demand for a given return period [
]. Different synthetic unit hydrograph methods available in the hydrologic literature can be found in Bhunya et al. [
] or in Singh et al. [
For the construction of the design hyetograph, the Alternating Blocks Hyetograph Method [
] was selected, which is commonly used as a conservative method because it conducts the highest estimate of peak flows, creating critical flood situations [
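The Alternating Blocks Method can be sketched as follows. The IDF relation and its parameters are purely illustrative placeholders; the study itself derives its hyetograph from 5-min gauge records (Section 3.2).

```python
def idf_intensity(duration_min, a=1200.0, b=10.0, c=0.75):
    """Illustrative IDF curve i = a / (d + b)^c, intensity in mm/h.

    The parameters a, b, c are hypothetical, for demonstration only.
    """
    return a / (duration_min + b) ** c

def alternating_blocks(total_min=60, dt_min=5, idf=idf_intensity):
    """Alternating Blocks Method: block intensities (mm/h), peak near the centre."""
    n = total_min // dt_min
    # Cumulative rainfall depth (mm) for each duration k*dt, from the IDF curve.
    depths = [idf(k * dt_min) * (k * dt_min) / 60.0 for k in range(1, n + 1)]
    # Incremental depth of each block, sorted from largest to smallest.
    increments = sorted(
        [depths[0]] + [depths[k] - depths[k - 1] for k in range(1, n)],
        reverse=True,
    )
    # Place blocks alternately around the central position (largest first).
    ordered = [0.0] * n
    positions = sorted(range(n), key=lambda i: abs(i - (n - 1) / 2))
    for pos, inc in zip(positions, increments):
        ordered[pos] = inc
    return [inc * 60.0 / dt_min for inc in ordered]  # depths back to intensities

hyetograph = alternating_blocks()  # twelve 5-min blocks for a 60-min storm
```

By construction, the total depth of the hyetograph equals the depth of the full-duration storm read from the IDF curve, which is why the method yields the conservative peak-flow estimates mentioned above.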
2.2. Numerical Tool: Iber
Iber is a two-dimensional (2D) numerical tool for the simulation of free surface flows [
]. Initially originating from the academic realm, it was originally conceived for assessing hydrodynamics and sediment transport processes in river systems [
]. Presently, Iber boasts an integrated suite of calculation modules tailored to simulate a diverse array of environmental fluid phenomena. These encompass hydrological processes [
], pollutant transport [
], large-wood transport [
], dam-break scenarios [
], eco-hydraulics [
], urban drainage processes [
], as well as the release and propagation of snow avalanches [
When utilizing Iber to simulate surface urban drainage problems, the source terms of the continuity equation must account for rainfall intensity, the loss process, and the flow sinks produced by drainage inlets connected to the sewer network. These considerations are reflected in the governing equations [
$\frac{\partial h}{\partial t} + \frac{\partial q_x}{\partial x} + \frac{\partial q_y}{\partial y} = R - f - f_i$
$\frac{\partial q_x}{\partial t} + \frac{\partial}{\partial x}\left(\frac{q_x^2}{h} + \frac{g h^2}{2}\right) + \frac{\partial}{\partial y}\left(\frac{q_x q_y}{h}\right) = g h \left(S_{o,x} - S_{f,x}\right)$
$\frac{\partial q_y}{\partial t} + \frac{\partial}{\partial x}\left(\frac{q_x q_y}{h}\right) + \frac{\partial}{\partial y}\left(\frac{q_y^2}{h} + \frac{g h^2}{2}\right) = g h \left(S_{o,y} - S_{f,y}\right)$
where $h$ is the water depth, $q_x$ and $q_y$ denote the two components of the specific discharge, $g$ is the gravitational acceleration, $S_{o,x}$ and $S_{o,y}$ are the two components of the bottom slope, and $S_{f,x}$ and $S_{f,y}$ are the two components of the friction slope, typically calculated using the Manning formula.
When it comes to hydrological modeling, the variable $R$ considers the impact of rainfall on the overland flow [
], while $f$ represents the rate of distributed hydrological losses, including infiltration, evapotranspiration, interception, and surface retention [
]. The term $f_i$ relates to the distributed losses of surface water as it is incorporated into the drainage system.
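As an illustration, the Manning-based friction slope terms of Equation (1) can be evaluated cell by cell as sketched below; the per-unit-width form and the sample values (n = 0.015, 5 cm of water moving at 1 m/s) are assumptions for the example, not outputs of the model.

```python
def friction_slope(n, h, qx, qy):
    """Return (Sf_x, Sf_y) for a 2D shallow water cell via the Manning formula.

    Sf_i = n^2 * q_i * |q| / h^(10/3), with q the specific discharge (m^2/s).
    """
    if h <= 0.0:
        return 0.0, 0.0  # dry cell: no friction contribution
    q_mod = (qx * qx + qy * qy) ** 0.5
    factor = n * n * q_mod / h ** (10.0 / 3.0)
    return factor * qx, factor * qy

# Example: bituminous surface (n = 0.015, as adopted in the case study),
# 5 cm of water flowing at 1 m/s in the x direction (qx = h * u).
sfx, sfy = friction_slope(0.015, 0.05, 0.05 * 1.0, 0.0)
```

For a flow aligned with one axis, this reduces to the familiar wide-channel form $S_f = n^2 u^2 / h^{4/3}$, which makes the example easy to cross-check by hand.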
Iber solves the two-dimensional shallow water equations (Equation (1)) using a conservative scheme based on the Finite Volume Method (FVM). It operates on unstructured, structured, or hybrid mesh configurations, which may comprise triangles and/or quadrilaterals. To handle convective fluxes, Iber employs an explicit first-order Godunov-type upwind scheme [
], specifically the Roe Scheme [
Iber offers a range of functionalities that greatly simplify the process of calculating rainfall–runoff transformation processes. It can be effectively utilized as a distributed hydrological model
based on the principles of 2D shallow water equations [
]. These functionalities include:
• Rainfall Field Definition: Iber allows users to define rainfall fields using data from rain gauges or raster files.
• Rainfall Loss Definition: Users can specify rainfall losses using various infiltration models, such as Green–Ampt, Horton, SCS, and constant infiltration, all of which can be defined as constant or spatially distributed.
• Ad hoc Numerical Scheme: Iber incorporates a specialized numerical scheme known as the DHD scheme, purpose-built for hydrological applications.
• Digital Terrain Model Enhancement: Iber provides utilities to efficiently smooth Digital Terrain Models, even when they may have poor quality or unfavorable conditions.
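To make the finite-volume structure concrete, the following is a deliberately minimal 1D sketch of an explicit mass-balance update with rainfall and loss source terms. It is not Iber's Roe or DHD implementation, only an illustration of how $R$ and $f$ enter the discrete continuity equation.

```python
def fv_mass_step(h, q, dx, dt, R=0.0, f=0.0):
    """One explicit step of dh/dt + dq/dx = R - f on a 1D row of cells.

    h : list of water depths (m) per cell
    q : list of unit discharges (m^2/s), assumed known and non-negative
    R, f : rainfall and loss rates (m/s), uniform for simplicity
    """
    n = len(h)
    h_new = h[:]
    for i in range(n):
        # Upwind interface fluxes (flow assumed left-to-right).
        q_in = q[i - 1] if i > 0 else 0.0
        q_out = q[i]
        h_new[i] = max(0.0, h[i] - dt / dx * (q_out - q_in) + dt * (R - f))
    return h_new

# Uniform rainfall on an initially dry row: each depth grows by R * dt.
h = fv_mass_step([0.0] * 5, [0.0] * 5, dx=0.4, dt=0.1, R=1e-5)
```

The explicit form also makes visible why such schemes are subject to a CFL-type restriction on the time step, which Iber handles internally.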
2.3. Design Criteria
The design of the drainage elements of the highway platform requires the establishment of particular parameters according to the regulations in force in each state. In the author’s opinion, the most
relevant parameters are as follows:
• Design storm. In this case study, rainfall is established for a return period of T = 25 years, as indicated in the Spanish Road Drainage Regulation (5.2-IC).
• Flow depth circulating over the side gutter, encroaching into the shoulder. This parameter relates to the driver’s safety to the extent that a vehicle driving on the shoulder could present
stability problems due to aquaplaning [
]. This parameter depends on the dimensions of the prefabricated element that limits the flow, generally a barrier kerb of 10 cm high.
• Flow depth in the downspout drain inlet and in the downspout. This parameter is related to the maintenance of the road embankment to the extent that hypothetical drain overflows on the embankment
could affect its stability. The authors, based on their experience, consider that a minimum guard of 3 cm should be considered, bearing in mind the potential existence of obstacles or sediments.
• Cross-section choice. A standard section must be selected with the indications of the Project Technical Specifications. The analysis of the possible standard sections is beyond the scope of this
research; instead, a methodology where any cross-section can be considered is presented. Nevertheless, generally speaking, there are currently two possible cross sections, with or without a side
gutter. For the purposes of the present paper, the latter type of cross-section has been selected (
Figure 1
3. Case Study
3.1. Study Area
The case study presented in this paper is a synthetic stretch of highway fulfilling the characteristics indicated in the highway design regulations of the Spanish Ministry of Public Works and
Transport (Instruction 3.1-IC).
The choice of a synthetic case was determined by the geometric needs of the cases to be modeled. In other words, it is a straight section with a slope varying between 0% and 5% (i.e., the maximum value allowed in the Spanish regulations for the design of highways). The curved section was not considered because the design radii of a dual carriageway are greater than 700 m, so the difference between the straight and curved sections is negligible.
The geometric characteristics of the straight highway section are based on the indications of Instruction 3.1-IC, which are summarized in
Table 1
In addition to the above parameters, the gutter and platform drainage channel dimensions, the collection nozzle typology, and the embankment downspout spacing are involved. With regards to the gutter
and platform drainage channel dimensions, two alternatives with a width of 20 and 30 cm were analyzed.
Nozzles collecting the roadway water and discharging it to the embankment downspouts were modeled for two sizes and two different shapes (i.e., symmetrical and asymmetrical) (see
Figure 2
). It should be noted that the hydraulic capacity of symmetric nozzles is lower than that of asymmetric ones, so their use would only be justified at alignment changes (sag vertical curves coinciding with slopes close to 0%) where flow reaches the nozzle from both directions.
Finally, regarding the dimensions of the embankment downspout itself, since these are prefabricated pieces, standard dimensions from the existing manufacturers were considered. From all of them, the
smallest one (30 cm wide) was selected since, as it will be later verified, it is sufficient to satisfy the conditions of the Spanish regulations for the design of highways.
Following the above conditions, a set of 72 different models combining all possible configurations was analyzed. These can be summarized as follows:
• Separation between embankment downspouts: 25, 20, and 15 m (three models)
• Longitudinal slope: 0.5% to 5% (six models)
• Gutter width: 30 cm and 20 cm (two models)
• Type of nozzle, called large and reduced (see dimensions in
Figure 2
) (two models)
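The 72 configurations are simply the Cartesian product of the four design variables listed above, which can be enumerated directly:

```python
# Enumerate the 72 simulated configurations: 3 x 6 x 2 x 2 = 72.
from itertools import product

separations = [25, 20, 15]            # downspout separation (m)
slopes = [0.5, 1, 2, 3, 4, 5]         # longitudinal slope (%)
gutter_widths = [30, 20]              # gutter width (cm)
nozzles = ["large", "reduced"]        # nozzle type

models = list(product(separations, slopes, gutter_widths, nozzles))
```

Enumerating the cases this way makes it straightforward to drive a batch of Iber runs, one per tuple, from a script.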
3.2. Climate Information
In the present study, the same design hyetograph as in Aranda et al. [
] was used. A typical highly convective storm, representative of the torrential rainfall regime of the area, was selected by the Júcar River Basin Authority (CHJ) through the Júcar Automatic Hydrological Information System (SAIH). The design hyetograph was created from a dataset with 26 full years of 5-min precipitation records (1995–2020) at the Arquillo reservoir rain gauge located in Teruel (Spain).
3.3. Numerical Model and Domain Discretization
Regarding the model geometry, as commented above, it is a straight section of highway with the characteristics indicated in
Table 1
. However, for computing time purposes, it was necessary to find out the minimum length of the model to ensure its hydraulic stability. In this sense, a sensitivity analysis was carried out on the
number of embankment downspouts and the minimum distance between them.
As shown in
Figure 3
, a minimum number of five embankment downspouts is needed since the first three (i.e., B3, B4, and B5) do not have a full receiving basin. B1 and B2 are justified by the need to compare at least two
downspouts with the same receiving basin size and, thus, with a similar flow. Otherwise, the model would not be representative of the physical phenomenon to simulate, giving rise to each subsequent
element having to capture the runoff from its basin plus the excess of the previous one.
Table 2
Table 3
Table 4
show the drainage area at each downspout for different longitudinal slopes and downspout separations.
According to
Table 3
Table 4
Figure 3
, it can be seen that B3, B4, and B5 only capture the runoff from the full basin for certain longitudinal slopes, while B1 and B2 always receive the contribution of the entire basin. Therefore, with
this model setup, at least there are always two downspouts in such a way that the runoff volume drained through them can be quantified.
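A quick plausibility check on Tables 2–4: for downspouts with a full receiving basin, the contributing area is approximately the downspout separation times the 10.5 m platform width used in the models; the small excess in the tabulated values comes from the nozzle geometry.

```python
# Nominal contributing area per downspout with a full receiving basin
# (approximation: separation times platform width; cf. Tables 2-4).
PLATFORM_WIDTH = 1.0 + 3.5 + 3.5 + 2.5  # shoulders + lanes (m), per Table 1

def nominal_drainage_area(separation_m, width_m=PLATFORM_WIDTH):
    return separation_m * width_m

areas = {sep: nominal_drainage_area(sep) for sep in (25, 20, 15)}
# {25: 262.5, 20: 210.0, 15: 157.5} -- cf. 262.75, 210.06, 157.55 in the tables
```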
With regard to other relevant model parameters, it should be noted that a constant roughness was adopted for the entire surface (bituminous hot mixture), estimated according to the Manning coefficient (n = 0.015). The SCS Curve Number Method was used as a rainfall–runoff model, with a curve number of 96. Finally, the mesh size of the hydraulic model is constant for all simulations: for the area
corresponding to the receiving basin (highway roadway), the elements are 40 × 20 cm; 15 × 15 cm elements for the nozzles; and 20 × 30 cm elements were selected for the embankment downspouts (
Figure 4
Table 5
summarizes the most relevant variables of the models used in this research according to downspout separation.
4. Results and Discussion
This section presents the results obtained through the 72 simulations conducted in the case study. The set of data obtained for each simulation corresponds to a two-dimensional model in such a way that information on flow depth, level, and velocity, as well as the Froude number, is available on each mesh element, thus making it possible to evaluate the adopted design criteria.
4.1. Hydraulic Performance
The aim of this analysis is to evaluate the hydraulic performance of the drainage element, verifying that each downspout is capable of absorbing all the flow collected by each nozzle. Otherwise, if a nozzle-downspout system is not capable of collecting all the flow coming from its basin, it transfers additional flow to the next nozzle, an effect that is aggravated along the length of the embankment.
Table 6
shows the percentage of the evacuated volume of water with respect to the total volume for the 72 models analyzed for different downspout separations, gutter width, and nozzle type.
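The performance metric of Table 6 can be stated as a one-line calculation. The suitability threshold below is an assumption reflecting the table's flagging convention (asterisked designs evacuate less than the full runoff volume), not a value fixed by any regulation.

```python
# Hydraulic performance as in Table 6: share of total runoff volume
# evacuated by the nozzle-downspout systems.

def evacuated_percentage(volume_out_m3, volume_total_m3):
    """Percentage of the total runoff volume captured by the drainage system."""
    return 100.0 * volume_out_m3 / volume_total_m3

def suitable(volume_out_m3, volume_total_m3, threshold_pct=100.0):
    """Flag a design as suitable when no runoff bypasses the system.

    threshold_pct is an assumed criterion mirroring Table 6's convention.
    """
    return evacuated_percentage(volume_out_m3, volume_total_m3) >= threshold_pct
```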
4.2. Nozzle Size
In this section, the effect of the size of the nozzle (
Figure 2
) is studied.
Figure 5
Figure 6
show the volume evacuated by each nozzle (considering equal drainage area) for both nozzle typologies and all longitudinally considered slopes.
As can be seen in
Figure 5
Figure 6
, when the slope is equal to or less than 2%, both nozzles have equal hydraulic capacity. However, for slopes greater than 2%, the small nozzle presents a decrease in its hydraulic capacity that
increases with the slope. This phenomenon can also be seen when comparing
Figure 5
Figure 6
. For small nozzles, the volume evacuated by B1 is higher than B2 due to the lower hydraulic capacity of the reduced nozzle, which causes an increase in the gutter water level and, thus, in the flow
evacuated by B1, since this is proportional to the water level.
4.3. Longitudinal Slope
This section shows the influence of the longitudinal slope on the elements that constitute the drainage of the platform: the gutter, nozzle, and downspout.
Table 7
Table 8
show the maximum water depth reached in the model with a 30 cm wide gutter, combining different longitudinal slopes and embankment downspout distancing for a large and a reduced nozzle, respectively.
Table 9
Table 10
show the maximum water depth reached in the model with a 20 cm wide gutter, combining different longitudinal slopes and embankment downspout distancing for a large and a reduced nozzle, respectively.
In view of the results, it can be concluded that the 30 cm wide downspout satisfactorily meets any type of design; that is, it is not a conditioning element, reaching maximum values of 6.3 cm for a
minimum slope of 0.5%, a 20 cm wide gutter, and a large nozzle.
Finally, the slope considerably influences the nozzle water depths. For low slopes (<2%) and low velocities, the flow follows the edge of the nozzle inlet. However, for slopes greater than 3%, the
flow is concentrated on the opposite edge of the inlet (
Figure 7
4.4. Gutter Width and Embankment Downspout Distancing
The influence of the gutter width and the embankment downspout distancing on the gutter is that associated with the flow velocity and depth in a channel. As the slope increases, there is a decrease in the water depth (
Figure 8
) and, consequently, an increase in velocity.
This information is relevant since it conditions the suitability of the design because, according to Instruction 5.2-IC, this maximum water depth must have a guard on the shoulder platform greater
than 5 cm.
Thus, for a 30 cm wide gutter, the guard is variable depending on the slope. As a summary, it can be concluded that, for slopes of less than 1%, downspout separations of 15 m can be used for both
reduced and large nozzles, and if the separation is 20 m, only large nozzles should be used. For slopes greater than 2%, downspouts must be installed every 15 or 20 m with reduced or large nozzles.
Notwithstanding, for a 20 cm wide gutter, the scenario is different (
Figure 9
). Separations of 15 m can be used with large nozzles for slopes of less than 1% and reduced nozzles for slopes between 1% and 5%. For separations of 20 m between downspouts, large nozzles can be
used at 1.5% and reduced nozzles at slopes greater than 3.5%. Lastly, separations between downspouts of 25 m can only be used with large nozzles and with slopes greater than 2.5%.
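The admissibility rules stated in this section for the 20 cm wide gutter can be condensed into a small lookup. This is a simplified transcription of the text above (slopes in %, separations in m), not an exhaustive reproduction of Table 6.

```python
# Admissible (separation, slope, nozzle) combinations for the 20 cm gutter,
# as stated in Section 4.4 (a sketch of the text, not of the full Table 6).

def admissible_20cm(separation, slope, nozzle):
    """Return True if the configuration satisfies the stated design rules."""
    if separation == 15:
        if nozzle == "large":
            return slope < 1.0
        return 1.0 <= slope <= 5.0           # reduced nozzle
    if separation == 20:
        if nozzle == "large":
            return slope >= 1.5              # "can be used at 1.5%" and above
        return slope > 3.5                   # reduced nozzle
    if separation == 25:
        return nozzle == "large" and slope > 2.5
    return False
```

Encoding the rules this way makes them easy to apply in a screening pass before running the full hydraulic model for a candidate design.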
4.5. Nozzle Type: Symmetrical and Asymmetrical
In this section, the influence of the nozzle type is analyzed: symmetrical vs. asymmetrical (
Figure 2
) performance was compared in a model with 3 m downspouts with a 20 m separation between them.
The results indicate that for slopes ≤ 1%, the hydraulic capacity obtained in both cases is practically the same. However, for greater slopes, the hydraulic capacity of the symmetrical nozzle
decreases progressively (
Table 11
Figure 10
shows the comparison between the longitudinal profile in the gutter when symmetrical and asymmetrical nozzles are arranged for a longitudinal slope of 5%, where a considerable increase in the water
depth is observed at the exit of the symmetrical nozzle compared to the asymmetrical one. The water depth in the gutter increases slightly on slopes ≥ 3%, and it presents a better distribution in the
nozzle on the asymmetrical one for all slopes (
Figure 11
Figure 12
For the application to a practical case, the authors recommend the creation of a synthetic model in accordance with the geometry and the hydrological characteristics of the site, such as a design
hyetograph with a time resolution of five minutes (according to the concentration times of each downspout catchment). Regarding the former, it is necessary to highlight the importance of an adequate
design of the width of the road, shoulders, gutter, and slope, which depends on the section. The variation in the slope is easily modifiable since once the model is available, it is only necessary to
assign heights from an external file (e.g., raster or meshing point cloud).
With all the above, the proposed methodology is easily applicable to any type of road to be dimensioned. Furthermore, this methodology is also applicable for infrastructure already built in order to
determine the degree of efficiency of the existing drainage system, thus anticipating future problems related to the road safety of the platform.
The methodology presented here addresses the problem of platform drainage through a rigorous mathematical model. The authors, experts in hydraulic calculation, have reviewed the current calculation
methods, reaching the conclusion that the current methodologies are based on empirical formulas to obtain the water depth and flow in each of the designed downspouts. However, the methodology
proposed here provides water depth, speed, and flow at any point of the model in a rigorous way.
5. Conclusions
This work explores the design of drainage downspout systems on road embankments through two-dimensional hydraulic modeling. A methodology that enables an efficient design of both the outlet and the
spacing between downspouts was proposed, facilitating an optimal reduction in the runoff on the platform.
The application of this methodology revealed that the volume of runoff evacuated is greater with lower longitudinal slopes and with lower separations between downspouts. In all cases studied, a gutter width of 30 cm and a large nozzle ensure total runoff evacuation, even for separations of up to 25 m.
The influence of the longitudinal slope on the drainage elements has been studied, revealing that the 30 cm wide downspout satisfactorily meets the requirements in any design typology, while the
depths of the nozzles vary depending on the slope. In such cases, the water depth reaches maximum values of 6.3 cm for a minimum longitudinal slope of 0.5%.
The influence of nozzle size shows that for slopes greater than 2%, smaller nozzles have lower hydraulic capacity than the larger ones. A 30 cm wide gutter with slopes lower than 1% and downspout
separations of 15 m can be used for any size of nozzles, while a 20 m separation should be used only for large nozzles. Where slopes are greater than 2%, downspouts must be installed every 15–20 m.
On the other hand, for a 20 cm wide gutter and a downspout separation of 15 m, large nozzles can be used for slopes lower than 1% and reduced nozzles for slopes between 1% and 5%. For downspout separations of 20 m, large nozzles can be used for slopes of 1.5% and reduced nozzles for slopes greater than 3.5%. Lastly, separations between downspouts of 25 m can only be used with large nozzles with slopes greater than 2.5%.
Author Contributions
Conceptualization, J.Á.A.; Methodology, M.S.-R.; Validation, M.S.-J.; Formal analysis, M.S.-J. and C.B.; Investigation, J.Á.A.; Data curation, J.Á.A.; Writing—original draft, M.S.-J. and C.B.;
Writing—review & editing, M.S.-R.; Visualization, C.B.; Supervision, J.Á.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
1. Maze, T.H.; Agarwal, M.; Burchett, G. Whether weather matters to traffic demand, traffic safety, and traffic operations and flow. Transp. Res. Rec. 2006, 1948, 170–176. [Google Scholar] [CrossRef
2. Hu, S.; Lin, H.; Xie, K.; Dai, J.; Qui, J. Impacts of Rain and Waterlogging on Traffic Speed and Volume on Urban Roads. IEEE Conf. Intell. Transp. Syst. Proc. ITSC 2018, 2018, 2943–2948. [Google
3. Chesterton, J.; Nancekivell, N.; Tunnicliffe, N. The Use of the Gallaway Formula for Aquaplaning Evaluation in New Zealand. In Proceedings of the NZIHT Transit NZ 8th Annual Conference, Auckland,
New Zealand, 15–17 October 2006; Volume 2006, p. 22. [Google Scholar]
4. Burlacu, A.; Răcănel, C.; Burlacu, A. Preventing aquaplaning phenomenon through technical solutions. J. Croat. Assoc. Civ. Eng. 2018, 70, 1057–1064. [Google Scholar]
5. Brown, S.A.; Schall, J.D.; Morris, J.L.; Doherty, C.L.; Stein, S.M.; Warner, J.C. Urban Drainage Design Manual: Hydraulic Engineering Circular 22; National Highway Institute: Fort Collins, CO,
USA, 2013.
6. Orden FOM/298/2016 de 15 de Febrero. Norma 5.2–IC de Drenaje Superficial de la Instrucción de Carreteras. 1990. Available online: https://www.mitma.gob.es/recursos_mfom/ordenfom_298_2016.pdf
(accessed on 6 October 2023).
7. Austroads. Drainage—Road Surface, Networks, Basins and Subsurface; Austroads Ltd.: Sydney, Australia, 2021. [Google Scholar]
8. Ong, G.P.; Fwa, T.F. Wet-Pavement Hydroplaning Risk and Skid Resistance: Modeling. J. Transp. Eng. 2007, 133, 590–598. [Google Scholar] [CrossRef]
9. Horne, W.B.; Dreher, R.C. Phenomena of Pneumatic Tire Hydroplaning; National Aeronautics and Space Administration: Washington, DC, USA, 1963; Volume 2056.
10. Huebner, R.S.; Reed, J.R.; Henry, J.J. Criteria for Predicting Hydroplaning Potential. J. Transp. Eng. 1986, 112, 549–553. [Google Scholar] [CrossRef]
11. Gallaway, B.M.; Ivey, D.L.; Hayes, G.; Ledbetter, W.B.; Olson, R.M.; Woods, D.L.; Schiller, R.F., Jr. Pavement and Geometric Design Criteria for Minimizing Hydroplaning; Publication
FHWA-RD-79-31; FHWA, U.S. Department of Transportation: Washington, DC, USA, 1979.
12. Wong, T.S.W. Kinematic wave method for determination of road drainage inlet spacing. Adv. Water Resour. 1994, 17, 329–336. [Google Scholar] [CrossRef]
13. Ku, H.J.; Jun, K.S. Design of road surface drainage facilities based on varied flow analysis. In Advances in Water Resources and Hydraulic Engineering: Proceedings of 16th IAHR-APD Congress and
3rd Symposium of IAHR-ISHS; Springer: Berlin/Heidelberg, Germany, 2009; pp. 240–245. [Google Scholar]
14. Nicklow, J.W.; Hellman, A.P. Optimal design of storm water inlets for highway drainage. J. Hydroinform. 2004, 6, 245–257. [Google Scholar] [CrossRef]
15. Jo, J.; Kwak, C.; Kim, J.; Kim, S. Deriving Optimal Analysis Method for Road Surface Runoff with Change in Basin Geometry and Grate Inlet Installation. Water 2022, 14, 3132. [Google Scholar] [
16. Han, S.; Xu, J.; Yan, M.; Gao, S.; Li, X.; Huang, X.; Liu, Z. Predicting the water film depth: A model based on the geometric features of road and capacity of drainage facilities. PLoS ONE 2021,
16, e0252767. [Google Scholar] [CrossRef]
17. Li, X.; Fang, X.; Chen, G.; Gong, Y.; Wang, J.; Li, J. Evaluating curb inlet efficiency for urban drainage and road bioretention facilities. Water 2019, 11, 851. [Google Scholar] [CrossRef]
18. Delestre, O.; Darboux, F.; James, F.; Lucas, C.; Laguerre, C.; Cordier, S. FullSWOF: A free software package for the simulation of shallow water flows. arXiv 2014, arXiv:1401.4125. [Google
19. Aranda, J.Á.; Beneyto, C.; Sánchez-Juny, M.; Bladé, E. Efficient Design of Road Drainage Systems. Water 2021, 13, 1661. [Google Scholar] [CrossRef]
20. Elfeki, A.M.; Ewea, H.A.; Al-Amri, N.S. Development of storm hyetographs for flood forecasting in the Kingdom of Saudi Arabia. Arab. J. Geosci. 2014, 7, 4387–4398. [Google Scholar] [CrossRef]
21. Pan, C.; Wang, X.; Liu, L.; Huang, H.; Wang, D. Improvement to the huff curve for design storms and urban flooding simulations in Guangzhou, China. Water 2017, 9, 411. [Google Scholar] [CrossRef]
22. Bhunya, P.K. Synthetic Unit Hydrograph Methods: A Critical Review. Open Hydrol. J. 2011, 5, 1–8. [Google Scholar] [CrossRef]
23. Singh, P.K.; Mishra, S.K.; Jain, M.K. A review of the synthetic unit hydrograph: From the empirical UH to advanced geomorphological methods. Hydrol. Sci. J. 2014, 59, 239–261. [Google Scholar] [
24. Chow, V.T.; Maidment, D.R.; Mays, L.W. Applied Hydrology; McGraw-Hill: New York, NY, USA, 1998; ISBN 0-07-100174-3. [Google Scholar]
25. Chimene, C.A.; Campos, J.N.B. The design flood under two approaches: Synthetic storm hyetograph and observed storm hyetograph. J. Appl. Water Eng. Res. 2020, 8, 171–182. [Google Scholar] [
26. Bladé, E.; Cea, L.; Corestein, G.; Escolano, E.; Puertas, J.; Vázquez-Cendón, E.; Dolz, J.; Coll, A. Iber: River flow numerical simulation tool. Rev. Int. Métodos Numéricos Para Cálculo Diseño
Ing. 2014, 30, 1–10. [Google Scholar] [CrossRef]
27. Bladé Castellet, E.; Cea, L.; Corestein, G.; Bladé, E.; Cea, L.; Corestein, G. Numerical modelling of river inundations. Ing. Agua 2014, 18, 68. [Google Scholar] [CrossRef]
28. Bladé, E.; Sánchez-Juny, M.; Arbat, M.; Dolz, J. Computational Modeling of Fine Sediment Relocation Within a Dam Reservoir by Means of Artificial Flood Generation in a Reservoir Cascade. Water
Resour. Res. 2019, 55, 3156–3170. [Google Scholar] [CrossRef]
29. Cea, L.; Bladé, E.; Corestein, G.; Fraga, I.; Espinal, M.; Puertas, J. Comparative analysis of several sediment transport formulations applied to dam-break flows over erodible beds. In
Proceedings of the EGU General Assembly 2014, Vienna, Austria, 27 April–2 May 2014. [Google Scholar]
30. Corestein, G.; Bladé, E.; Niñerola, D. Modelling bedload transport for mixed flows in presence of a non-erodible bed layer. In Proceedings of the River Flow 2014; CRC Press: Boca Raton, FL, USA,
2014; pp. 1611–1618. [Google Scholar]
31. González-Aguirre, J.C.; Vázquez-Cendón, M.E.; Alavez-Ramírez, J. Simulación numérica de inundaciones en Villahermosa México usando el código IBER. Ing. Agua 2016, 20, 201. [Google Scholar] [
32. García-Alén, G.; García-Fonte, O.; Cea, L.; Pena, L.; Puertas, J. Modelling Weirs in Two-Dimensional Shallow Water Models. Water 2021, 13, 2152. [Google Scholar] [CrossRef]
33. Areu-Rangel, O.; Cea, L.; Bonasia, R.; Espinosa-Echavarria, V. Impact of Urban Growth and Changes in Land Use on River Flood Hazard in Villahermosa, Tabasco (Mexico). Water 2019, 11, 304. [Google
Scholar] [CrossRef]
34. Cea, L.; Álvarez, M.; Puertas, J. Estimation of flood-exposed population in data-scarce regions combining satellite imagery and high resolution hydrological-hydraulic modelling: A case study in
the Licungo basin (Mozambique). J. Hydrol. Reg. Stud. 2022, 44, 101247. [Google Scholar] [CrossRef]
35. Cea, L.; Bladé, E. A simple and efficient unstructured finite volume scheme for solving the shallow water equations in overland flow applications. Water Resour. Res. 2015, 51, 5464–5486. [Google
Scholar] [CrossRef]
36. García-Alén, G.; Hostache, R.; Cea, L.; Puertas, J. Joint assimilation of satellite soil moisture and streamflow data for the hydrological application of a two-dimensional shallow water model. J.
Hydrol. 2023, 621, 129667. [Google Scholar] [CrossRef]
37. Cea, L.; Bermúdez, M.; Puertas, J.; Bladé, E.; Corestein, G.; Escolano, E.; Conde, A.; Bockelmann-Evans, B.; Ahmadian, R. IberWQ: New simulation tool for 2D water quality modelling in rivers and
shallow estuaries. J. Hydroinform. 2016, 18, 816–830. [Google Scholar] [CrossRef]
38. Anta Álvarez, J.; Bermúdez, M.; Cea, L.; Suárez, J.; Ures, P.; Puertas, J. Modelización de los impactos por DSU en el río Miño (Lugo). Ing. Agua 2015, 19, 105. [Google Scholar] [CrossRef]
39. Ruiz Villanueva, V.; Bladé Castellet, E.; Díez-Herrero, A.; Bodoque, J.M.; Sánchez-Juny, M. Two-dimensional modelling of large wood transport during flash floods. Earth Surf. Process. Landf. 2014
, 39, 438–449. [Google Scholar] [CrossRef]
40. Quiniou, M.; Piton, G.; Villanueva, V.R.; Perrin, C.; Savatier, J.; Bladé, E. Large Wood Transport-Related Flood Risks Analysis of Lourdes City Using Iber-Wood Model. In Advances in
Hydroinformatics: Models for Complex and Global Water Issues—Practices and Expectations; Springer Nature Singapore: Singapore, 2022; pp. 481–498. [Google Scholar]
41. Sanz-Ramos, M.; Bladé, E.; Silva-Cancino, N.; Salazar, F.; López-Gómez, D.; Martínez-Gomariz, E. A Probabilistic Approach for Off-Stream Reservoir Failure Flood Hazard Assessment. Water 2023, 15,
2202. [Google Scholar] [CrossRef]
42. Álvarez, M.; Puertas, J.; Peña, E.; Bermúdez, M. Two-Dimensional Dam-Break Flood Analysis in Data-Scarce Regions: The Case Study of Chipembe Dam, Mozambique. Water 2017, 9, 432. [Google Scholar]
43. Sopelana, J.; Cea, L.; Ruano, S. Determinación de la inundación en tramos de ríos afectados por marea basada en la simulación continúa de nivel. Ing. Agua 2017, 21, 231. [Google Scholar] [
44. Sanz-Ramos, M.; López-Gómez, D.; Bladé, E.; Dehghan-Souraki, D. A CUDA Fortran GPU-parallelised hydrodynamic tool for high-resolution and long-term eco-hydraulic modelling. Environ. Model. Softw.
2023, 161, 105628. [Google Scholar] [CrossRef]
45. Sañudo, E.; Cea, L.; Puertas, J. Modelling Pluvial Flooding in Urban Areas Coupling the Models Iber and SWMM. Water 2020, 12, 2647. [Google Scholar] [CrossRef]
46. Sanz-Ramos, M.; Olivares, G.; Bladé, E. Experimental characterization and two-dimensional hydraulic-hydrologic modelling of the infiltration process through permeable pavements. Rev. Int. Métodos
Numéricos Para Cálculo Diseño Ing. 2022, 38. [Google Scholar] [CrossRef]
47. Sanz-Ramos, M.; Bladé, E.; Oller, P.; Furdada, G. Numerical modelling of dense snow avalanches with a well-balanced scheme based on the 2D shallow water equations. J. Glaciol. 2023, 1–17. [Google
Scholar] [CrossRef]
48. Sanz-Ramos, M.; Andrade, C.A.; Oller, P.; Furdada, G.; Bladé, E.; Martínez-Gomariz, E. Reconstructing the Snow Avalanche of Coll de Pal 2018 (SE Pyrenees). GeoHazards 2021, 2, 196–211. [Google
Scholar] [CrossRef]
49. Toro, E.F. Riemann Solvers and Numerical Methods for Fluid Dynamics; Springer: Berlin/Heidelberg, Germany, 2009; Volume 40, ISBN 978-3-540-25202-3. [Google Scholar]
50. Roe, P.L. A basis for the upwind differencing of the two-dimensional unsteady Euler equations. Numer. Methods Fluid Dyn. 1986, 2, 55–80. [Google Scholar]
51. Guo, J.C.Y. Design of Street Curb Opening Inlets Using a Decay-Based Clogging Factor. J. Hydraul. Eng. 2006, 132, 1237–1241. [Google Scholar] [CrossRef]
52. Mukherjee, M.D. Highway Surface Drainage System & Problems of Water Logging In Road Section. Int. J. Eng. Sci. 2014, 3, 44–51. [Google Scholar]
Figure 2. Type of nozzles modeled in this research. Above: asymmetrical. Below: symmetrical. Left: large nozzles (LN). Right: reduced nozzles (RN). Blue arrows indicate the flow direction in the
gutter and the downspout.
Figure 7. Maximum water depth in the nozzle for different longitudinal slopes for a 30 cm gutter and a large nozzle.
Figure 8. Maximum water depth (cm) for the 30 cm wide gutter for different slopes and types of nozzle (the black dotted line indicates the minimum guard that the sheet of water can reach according to
the Spanish standard 5.2-IC).
Figure 9. Maximum water depth (cm) for the 20 cm wide gutter for different slopes and types of nozzles (the black dotted line indicates the minimum guard that the sheet of water can reach according
to the Spanish standard 5.2-IC).
Figure 10. Longitudinal profile for a hydraulic model with 20 m downspout spacing, a 30 cm wide gutter, and large asymmetric and symmetric nozzles.
Figure 11. Water depth and velocity for a symmetrical nozzle with a downspout separation of 20 m, a longitudinal slope of 5%, and a gutter width of 30 cm. The yellow arrows represent the direction and magnitude of the velocity vectors at each point.
Figure 12. Water depth and velocity for an asymmetrical nozzle with a downspout separation of 20 m, a longitudinal slope of 5%, and a gutter width of 30 cm. The yellow arrows represent the direction and magnitude of the velocity vectors at each point.
Table 1. Geometric characteristics of the straight highway section according to Instruction 3.1-IC.
Inner Shoulder Inner Road Lane Outer Road Lane Outer Shoulder Berm Cross Slope (i[t]) Long. Slope (i[l])
1.0 m 3.5 m 3.5 m 2.5 m 1.5 m 2% 0–5%
Table 2. Drainage area for each embankment downspout (B) (m^2) for a separation of 25 m, depending on the longitudinal slope of the road (i[l]).
Slope (i[l]) B1 B2 B3 B4 B5
0.5% 262.75 262.75 262.75 262.75 40.17
1% 262.75 262.75 262.75 262.52 25.25
2% 262.75 262.75 262.75 246.31 11.52
3% 262.75 262.75 262.75 220.92 6.97
4% 262.75 262.75 262.75 193.22 4.73
5% 262.75 262.75 262.75 164.61 3.41
Table 3. Drainage area for each embankment downspout (B) (m^2) for a separation of 20 m, depending on the longitudinal slope of the road (i[l]).
Slope (i[l]) B1 B2 B3 B4 B5
0.5% 210.06 210.06 210.06 210.06 40.17
1% 210.06 210.06 210.06 210.06 25.25
2% 210.06 210.06 210.06 193.79 11.52
3% 210.06 210.06 210.06 168.40 6.97
4% 210.06 210.06 210.06 141.39 4.05
5% 210.06 210.06 209.53 113.00 3.40
Table 4. Drainage area for each embankment downspout (B) (m^2) for a separation of 15 m, depending on the longitudinal slope of the road (i[l]).
Slope (i[l]) B1 B2 B3 B4 B5
0.5% 157.55 157.55 157.55 157.55 40.17
1% 157.55 157.55 157.55 157.49 25.25
2% 157.55 157.55 157.55 141.28 11.52
3% 157.55 157.55 157.55 168.40 6.97
4% 157.55 157.55 156.86 88.75 4.73
5% 157.55 157.55 146.72 69.75 3.40
Downspout Separation (m) Longitude (m) Width (m) Surface Area (m^2) Volume (m^3)
25 108 10.5 1134.3 171.2
20 88 10.5 924.27 140.04
15 68 10.5 714.21 109.06
Table 6. Percentage of evacuated volume of water with respect to the total volume for the 72 models analyzed for different downspout separations, gutter width, and nozzle type.
Downspout Separation 25 m 20 m 15 m
Gutter Width 20 cm 30 cm 20 cm 30 cm 20 cm 30 cm
Nozzle Type RN LN RN LN RN LN RN LN RN LN RN LN
Longitudinal slope (i[l]):
0.5 * 99 * 99 100 100 100 100 100 100 100 100 100 100
1 * 99 * 99 * 98 100 * 95 100 100 100 100 100 100 100
2 * 96 * 99 * 96 100 * 95 100 * 97 100 * 99 100 100 100
3 * 93 * 99 * 92 100 * 92 100 * 94 100 * 96 100 * 98 100
4 * 89 * 98 * 88 100 * 88 100 * 91 100 * 94 100 * 98 100
5 * 84 * 97 * 82 * 99 * 86 98 * 86 100 * 91 100 * 97 100
* Not suitable design; RN: Reduced nozzle; LN: Large nozzle.
Table 7. Maximum water depths (in cm) for the different drainage elements of the platform with a large nozzle and a 30 cm wide gutter.
25 m Downspout Separation 20 m Downspout Separation 15 m Downspout Separation
Slope Gutter Nozzle Downspout Gutter Nozzle Downspout Gutter Nozzle Downspout
0.5% 7.5 8.2 6.6 6.5 7.5 6.1 5.4 7.0 5.0
1% 6.9 7.5 6.2 6.0 7.0 6.0 5.2 6.0 5.0
2% 6.0 7.0 6.0 5.6 6.7 5.6 5.0 6.0 5.0
3% 5.7 7.3 5.8 5.3 6.0 5.2 4.9 6.0 5.0
4% 5.3 7.1 5.7 4.8 5.9 5.0 4.6 5.0 4.0
5% 5.0 6.9 5.5 4.6 5.8 4.8 4.0 5.0 4.0
Table 8. Maximum water depths (in cm) for the different drainage elements of the platform with a reduced nozzle and a 30 cm wide gutter.
25 m Downspout Separation 20 m Downspout Separation 15 m Downspout Separation
Slope Gutter Nozzle Downspout Gutter Nozzle Downspout Gutter Nozzle Downspout
0.5% 7.9 7.1 3.8 7.0 6.1 3.3 5.3 4.8 2.6
1% 7.7 6.6 3.7 6.7 5.7 3.1 5.1 4.5 2.5
2% 7.5 7.4 3.8 6.5 6.1 3.0 5.0 4.5 2.6
3% 7.1 8.7 3.6 6.2 7.0 3.0 5.0 5.9 2.5
4% 6.8 9.7 3.4 5.8 7.8 3.0 4.7 6.4 2.3
5% 6.6 10.0 3.3 5.7 8.9 3.0 4.4 8.1 2.6
Table 9. Maximum water depths (in cm) for the different drainage elements of the platform with a large nozzle and a 20 cm wide gutter.
25 m Downspout Separation 20 m Downspout Separation 15 m Downspout Separation
Slope Gutter Nozzle Downspout Gutter Nozzle Downspout Gutter Nozzle Downspout
0.5% 8.9 7.2 6.3 7.2 6.6 5.2 6.5 6.1 4.3
1% 7.8 6.4 5.5 6.8 6.3 4.8 6.0 5.4 4.6
2% 6.9 6.1 5.3 6.0 6.1 4.5 4.9 4.9 4.1
3% 6.3 6.0 5.3 5.2 6.0 4.4 4.3 4.4 4.1
4% 6.0 5.8 5.1 4.9 5.8 4.3 4.0 4.2 3.8
5% 5.8 6.2 4.9 4.7 5.6 4.2 3.9 4.1 3.8
Table 10. Maximum water depths (in cm) for the different drainage elements of the platform with a reduced nozzle and a 20 cm wide gutter.
25 m Downspout Separation 20 m Downspout Separation 15 m Downspout Separation
Slope Gutter Nozzle Downspout Gutter Nozzle Downspout Gutter Nozzle Downspout
0.5% 9.5 6.2 3.3 7.9 4.8 2.6 6.8 5.4 3.1
1% 9.3 6.2 3.6 7.8 5.2 3.0 6.5 4.5 2.4
2% 8.1 7.1 3.4 7.0 6.6 2.8 6.1 4.5 2.5
3% 7.6 8.0 3.3 6.6 7.0 3.0 5.4 5.3 2.3
4% 7.3 8.8 3.2 6.4 7.3 2.7 5.1 5.9 2.2
5% 7.2 9.5 3.1 6.3 7.9 2.6 5.2 7.4 2.8
20 m Downspout Separation, 30 cm Wide Gutter
% of Evacuated Volume Gutter Water Depth Nozzle Water Depth
Slope S A S A S A (S: symmetrical nozzle; A: asymmetrical nozzle)
0.5% 100 100 6.5 6.5 7.0 6.1
1% 100 100 6.0 6.3 7.3 6
2% 99 100 5.0 5.6 7.2 5.6
3% 93 100 5.2 5.3 7.5 5.2
4% 81 100 5.8 4.8 7.6 5
5% 66 100 6.2 4.6 8.1 4.8
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Aranda, J.Á.; Sánchez-Juny, M.; Sanz-Ramos, M.; Beneyto, C. Design of Drainage Downspouts Systems over a Road Embankment. Water 2023, 15, 3529. https://doi.org/10.3390/w15203529
Exploring How Universe Works
Here I try to present (and explore) various aspects of how Universe works. This includes looking at polyhedra (an indication of the shape of space), dynamics, some mathematics, some physics, etc.
Also included is my work with Lynnclaire Dennis in which I try to create computer graphics to help illustrate what she describes.
Dirac String Trick.: - Here is how I visualize the motion of the strings for programming the Dirac String Trick. Some speculate that this motion is connected with spin-1/2 particles. (See WSM below
for more on this topic.) (08-29-2008)
Wave Structure of Matter model.: - Animations and descriptions to help clarify and extend Milo Wolff's WSM. (06-03-2008)
Modeling 5 Octa: - Looking at how to build a model of the 5 intersecting Octahedra. (03-13-2004)
Jitterbug triangle orientations: - I find that the Jitterbug's triangle in the Dodecahedron position is a 60° rotation from the Icosahedron position, exactly. This forms a Star of David intersection
of triangles. (07-02-2003)
10 Octahedra defined Icosahedra in matrix: - Since there are 5 Octahedra in the basic polyhedra matrix, and there are 2 Icosahedra per Octahedron, there are 10 Icosahedra within the Octahedra space.
Marvin Solit's Polyhedra Work: - I am looking into Marvin Solit's 30-Verti and other polyhedra and I am helping him document his discoveries. Along the way, I add my own discoveries and connections.
Dual Polyhedra Motions - Here I look at two different kinds of motion of the Platonic polyhedra. (06-23-2002)
Limits vs Levels - Here I show that what is perceived as a maximum is but a minimum from another point of view. What is the end is but a start. Possible geometric model of "enlightenment"?
Cone Angles - Here I show that some angles calculated from Physics matches cone angles generated by the Jitterbug motion. (06-09-2002)
Golden Ratio in 3-, 6-fold Geometry - I thought the Golden Ratio only occurred in 5-fold (pentagon, pentagram) geometry. Here it is using 3- and 6-fold geometry. (05-29-2002)
Path Curve Geometry - From Lawrence Edwards' book "The vortex of life : nature's patterns in space and time" I learned a little bit about projective geometry and Path Curves. The shapes generated
relate to pine cones, flower and tree buds, eggs, embryos, etc. Relates to intersecting vortices: a theme I am finding over and over again in Universe. (2001)
Lynnclaire Dennis' Geometry - Lynnclaire Dennis had several near death experiences and "brought back" several interesting bits of geometric information which I am helping to identify and describe.
An Introduction to Polyhedra and the Jitterbug - Here is a web version of a presentation I gave at the State University of New York, Oswego, Department of Technology's 63rd Annual Fall Conference,
Oct. 25-26, 2001.
Usage Note: My work is copyrighted. You may reference and use my work in non-profit projects but you may not include my work, or parts of it, in any for-profit project without my consent.
Last updated: 06-03-2008
What do we mean by "statistics"?
Ron Smith, Birkbeck, University of London
Put online February 2010
This text is extracted from Applied Statistics and Econometrics: Notes and Exercises, one of the course documents shared through the TRUE wiki for Econometrics. It is available under a Creative
Commons license, some rights reserved.
The word statistics has at least three meanings. Firstly, it is the data themselves, e.g. the numbers that the Office of National Statistics collects. Secondly, it has a technical meaning as measures
calculated from the data, e.g. an average. Thirdly, it is the academic subject which studies how we make inferences from the data.
Descriptive statistics provide informative summaries (e.g. averages) or presentations (e.g. graphs) of the data. We will consider this type of statistics first.
Whether a particular summary of the data is useful or not depends on what you want it for. You will have to judge the quality of the summary in terms of the purpose for which it is used. Different
summaries are useful for different purposes.
Statistical inference starts from an explicit probability model of how the data were generated. For instance, an empirical demand curve says quantity demanded depends on income, price and random
factors, which we model using probability theory. The model often involves some unknown parameters, such as the price elasticity of demand for a product. We then ask how to get an estimate of this
unknown parameter from a sample of observations on price charged and quantity sold of this product. There are usually lots of different ways to estimate the parameter and thus lots of different
estimators: rules for calculating an estimate from the data. Some ways will tend to give good estimates, some bad, so we need to study the properties of different estimators. Whether a particular
estimator is good or bad depends on the purpose.
For instance, there are three common measures (estimators) of the typical value (central tendency) of a set of observations: the arithmetic mean or average; the median, the value for which half the
observations lie above and half below; and the mode, the most commonly occurring value. These measure different aspects of the distribution and are useful for different purposes. For many economic
measures, like income, these measures can be very different. Be careful with averages. If we have a group of 100 people, one of whom has had a leg amputated, the average number of legs is 1.99. Thus
99 out of 100 people have an above average number of legs. Notice, in this case the median and modal number of legs is two.
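The legs example can be checked numerically. This is a quick illustration using only the standard library, not part of the original notes:

```python
from statistics import mean, median, mode

# 100 people: 99 with two legs, one with a single leg (the amputation example)
legs = [2] * 99 + [1]

print(mean(legs))    # 1.99 -- the arithmetic mean
print(median(legs))  # 2.0  -- half the values lie at or below this
print(mode(legs))    # 2    -- the most commonly occurring value
```

So 99 of the 100 people do indeed have an "above average" number of legs, while the median and mode both report the typical value of two.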
We often want to know how dispersed the data are, the extent to which it can differ from the typical value. A simple measure is the range, the difference between the maximum and minimum value, but
this is very sensitive to extreme values and we will consider other measures below.
Sometimes we are interested in a single variable, e.g. height, and consider its average in a group and how it varies in the group. This is univariate statistics, to do with one variable. Sometimes,
we are interested in the association between variables: how does weight vary with height? or how does quantity vary with price? This is multivariate statistics, more than one variable is involved and
the most common models of association between variables are correlation and regression, covered below.
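As a small preview of those two models of association, here is a hedged sketch in Python; the height and weight numbers are invented purely for illustration:

```python
import numpy as np

# Hypothetical (height in cm, weight in kg) observations -- illustrative only
height = np.array([155, 160, 165, 170, 175, 180, 185])
weight = np.array([52, 58, 61, 66, 72, 75, 82])

# Correlation: strength of the linear association between the two variables
r = np.corrcoef(height, weight)[0, 1]

# Regression: a line weight = intercept + slope * height fitted by least squares
slope, intercept = np.polyfit(height, weight, 1)

print(round(r, 2))  # close to 1, since weight rises steadily with height
```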
A model is a simplified representation of reality. It may be a physical model, like a model airplane. In economics, a famous physical model is the Phillips Machine, now in the Science Museum, which
represented the flow of national income by water going through transparent pipes. Most economic models are just sets of equations. There are lots of possible models and we use theory (interpreted
widely to include institutional and historical information) and statistical methods to help us choose the best model of the available data for our particular purpose. The theory also helps us
interpret the estimates or other summary statistics that we calculate.
Doing applied quantitative economics or finance, usually called econometrics, thus involves a synthesis of various elements. We must be clear about why we are doing it: the purpose of the exercise.
We must understand the characteristics of the data and appreciate their weaknesses. We must use theory to provide a model of the process that may have generated the data. We must know the statistical
methods which can be used to summarise the data, e.g. in estimates. We must be able to use the computer software that helps us calculate the summaries. We must be able to interpret the summaries in
terms of our original purpose and the theory.
31 Feedback Through Nodal Attribute Changes
This next tutorial will briefly demonstrate how changes to nodal attributes over time are another mechanism that results in feedback between the epidemic system and the dynamic network structure. In
the context of diseases like HIV, partnership selection and activity level may vary based on disease status (more specifically, diagnosed disease status but we’ll ignore that complication in this
model). Because disease status can change over time, any network model that references the disease status attribute must then be resimulated when the nodal attributes change. And these nodal
attributes for disease status change as a function of the infections that results from the network structure. Therefore, we have feedback! This tutorial will walk through the parameterization and
implications of this model.
31.1 Network Model
First, we will set up our network and then calculate our target statistics under the parameterization that people with infection have a lower mean degree than those without infection. This may be a
function of behavioral change following disease diagnosis, which is commonly observed for HIV. We are not yet considering how they match up.
31.1.1 Parameterization
The initial prevalence in the network will be 20%. Note that we need to specify a vertex attribute, which must be called status, for this model. This is another “special” attribute on the network,
like group, that is treated differently from any other named attribute. That is because status is the main attribute within netsim that keeps track of the individual disease status. We need to
initialize the network like this with status as an individual attribute specifically because it will be a term within our TERGM; therefore, we will not randomly set the infected number with init.net
as we have in previous tutorials.
Here, we assign exactly 100 nodes as infected, and randomly select which nodes those will be.
prev <- 0.2
infIds <- sample(1:n, n*prev)
nw <- set_vertex_attribute(nw, "status", "s")
nw <- set_vertex_attribute(nw, "status", "i", infIds)
get_vertex_attribute(nw, "status")
[1] "s" "s" "s" "s" "s" "s" "s" "s" "i" "s" "s" "s" "s" "s" "s" "s" "i" "s"
[19] "s" "s" "s" "s" "s" "s" "s" "s" "i" "s" "s" "s" "s" "s" "s" "s" "s" "i"
[37] "i" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "i"
[55] "s" "i" "i" "s" "s" "i" "s" "s" "s" "s" "s" "s" "s" "i" "s" "i" "s" "s"
[73] "s" "s" "i" "s" "s" "s" "s" "i" "s" "i" "s" "s" "i" "s" "s" "s" "s" "s"
[91] "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "i" "s" "i" "s" "s"
[109] "s" "s" "s" "s" "s" "s" "s" "s" "i" "s" "s" "s" "s" "s" "i" "s" "s" "i"
[127] "s" "s" "i" "s" "s" "i" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s"
[145] "i" "s" "s" "s" "i" "s" "s" "s" "s" "s" "s" "s" "i" "s" "s" "i" "s" "s"
[163] "s" "s" "s" "s" "i" "s" "i" "s" "i" "s" "s" "s" "s" "s" "s" "s" "s" "s"
[181] "i" "s" "s" "s" "s" "s" "s" "i" "i" "s" "s" "s" "s" "s" "s" "s" "i" "s"
[199] "s" "s" "s" "s" "s" "s" "s" "i" "s" "s" "s" "i" "i" "s" "i" "i" "s" "s"
[217] "s" "s" "s" "s" "i" "i" "i" "s" "s" "i" "s" "i" "s" "s" "s" "s" "s" "i"
[235] "i" "s" "s" "s" "s" "i" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s"
[253] "s" "s" "i" "s" "s" "s" "s" "s" "s" "i" "s" "s" "i" "s" "s" "s" "s" "s"
[271] "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "i" "s" "s" "s"
[289] "s" "s" "s" "i" "s" "s" "s" "i" "s" "i" "i" "s" "s" "i" "s" "i" "s" "i"
[307] "s" "s" "s" "i" "s" "i" "s" "i" "s" "s" "s" "i" "s" "s" "s" "s" "s" "i"
[325] "s" "i" "s" "i" "s" "s" "s" "i" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s"
[343] "s" "s" "s" "s" "i" "s" "s" "i" "s" "s" "i" "i" "s" "s" "s" "s" "s" "s"
[361] "i" "s" "s" "s" "s" "i" "s" "s" "i" "s" "i" "s" "s" "s" "s" "i" "s" "i"
[379] "i" "s" "s" "s" "s" "s" "s" "s" "s" "s" "i" "s" "s" "s" "s" "s" "i" "s"
[397] "s" "i" "i" "s" "s" "i" "s" "s" "i" "i" "s" "s" "i" "i" "i" "s" "s" "s"
[415] "s" "s" "s" "s" "s" "i" "s" "s" "s" "s" "i" "s" "i" "s" "s" "s" "s" "s"
[433] "i" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "i" "s" "i"
[451] "s" "i" "s" "s" "s" "i" "s" "s" "s" "s" "s" "s" "s" "s" "i" "s" "s" "s"
[469] "s" "s" "i" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s" "s"
[487] "s" "s" "s" "s" "s" "i" "s" "i" "s" "i" "i" "s" "s" "s"
A nodefactor term will allow the mean degree of the infected persons to differ from that of susceptible persons. In the previous tutorial, we calculated the nodefactor target statistic as the mean
degree of a group times the size of the group. Another way of expressing that is a count of the number of times a member of a group, here an infected person or a susceptible person, shows up in an edge.
And this is just the group mean degree times the group size; here, the group size is defined by the disease prevalence, prev, variable specified above.
The number of edges then is the sum of these two nodefactor statistics divided by 2 (because nodefactor counts node endpoints, and each edge has two).
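The arithmetic above can be sketched as follows. This is an illustration in Python rather than EpiModel code; the group mean degrees (0.8 for susceptible, 0.3 for infected) are inferred from the target statistics that appear in the diagnostics (320 and 30), not stated directly in the text:

```python
# Target statistics for the nodefactor/edges parameterization described above
n = 500                         # network size
prev = 0.2                      # initial disease prevalence
n_inf = int(n * prev)           # 100 infected nodes
n_sus = n - n_inf               # 400 susceptible nodes

md_sus, md_inf = 0.8, 0.3       # group mean degrees (assumed values)

# nodefactor stat = number of times a group member shows up in an edge
nf_sus = md_sus * n_sus         # 320
nf_inf = md_inf * n_inf         # 30

# Each edge contributes two node endpoints, so:
edges = (nf_sus + nf_inf) / 2   # 175

print(round(nf_sus), round(nf_inf), round(edges))
```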
Next, let’s move on to parameterizing mixing by disease status. In this theoretically parameterized model, here again it is helpful to think about this in terms of what the expected statistic would
be in the absence of any preferential or assortative mixing. That is, what is the value of the nodematch target statistic under a proportional (random) mixing model? We worked this out in the
previous tutorial for the case where there were equally sized groups with the same mean degree. It gets more complicated here because the groups are of different sizes, and different activity levels.
There are different ways to find this solution.
The exact solution: from population genetics, the Hardy-Weinberg Principle. This code below fills out a two-by-two table that contains the probabilities of a S-S, I-I, and S-I pair. The proportion of
partnerships that are matched on status is the diagonal of the table, which is the sum of the S-S and I-I probabilities.
A simulation-based solution: we estimate an ERGM in which there is no nodematch term but heterogeneity in activity, then simulate from that fitted model to calculate the expected number of matched
edges. The proportion of matched edges is that number over the total edges.
fit <- netest(nw,
formation = ~edges + nodefactor("status"),
target.stats = c(edges, inedges.sus),
coef.diss = dissolution_coefs(~offset(edges), duration = 1))
sim <- netdx(fit, dynamic = FALSE, nsims = 1e4,
nwstats.formula = ~edges + nodematch("status"))
Network Diagnostics
- Simulating 10000 networks
- Calculating formation statistics
sim edges nodefactor.status.s nodematch.status
The main substantive finding here is that 84% of relations are expected to be within the same disease status group under proportional (random) mixing, even without a behavioral preference for
assortative mixing. This is because susceptible persons are a larger group in the population, and have more relationships per-capita.
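The 84% figure can be reproduced directly with the Hardy-Weinberg-style calculation, using the nodefactor counts from the parameterization above (a Python sketch, not the R code used in the tutorial):

```python
# Expected proportion of status-matched edges under proportional (random) mixing
nf_sus, nf_inf = 320.0, 30.0   # times each group appears in an edge
total = nf_sus + nf_inf        # total edge endpoints (2 * edges)

p_s = nf_sus / total           # probability a random endpoint is susceptible
p_i = nf_inf / total           # probability a random endpoint is infected

# Diagonal of the two-by-two mixing table: P(S-S) + P(I-I)
match = p_s ** 2 + p_i ** 2

print(round(match, 2))         # 0.84 -- the 84% quoted above
```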
Let us now imagine that there are in fact slightly more within-group ties than expected by chance:
31.1.2 Estimation
This tutorial will compare two models to assess the impact of behavior driven by disease status on epidemic trajectories:
• Model 1 features two forms of seroadaptive behavior: different levels of activity by disease status, and matching on status.
• Model 2 has neither, so is just a Bernoulli model with same mean degree as Model 1.
31.1.2.1 Model 1
The network model with serosorting will include terms for both the number of times susceptible persons show up in an edge, as well as the number of node-matched edges overall. It is unnecessary to
specify the nodefactor term for the susceptible persons because that is a function of total edges and the nodefactor term for the infected persons (the base level is always the first in alphabetical
or numerical order, so “i” for our status variable level in this case). The average partnership duration in both models will be 50 time steps.
formation <- ~edges + nodefactor("status") + nodematch("status")
target.stats <- c(edges, inedges.sus, nmatch)
coef.diss <- dissolution_coefs(dissolution = ~offset(edges), 50)
est <- netest(nw, formation, target.stats, coef.diss)
Warning: 'glpk' selected as the solver, but package 'Rglpk' is not available;
falling back to 'lpSolveAPI'. This should be fine unless the sample size and/or
the number of parameters is very big.
We run the network diagnostics on the model.
dx <- netdx(est, nsims = 10, nsteps = 500, ncores = 5,
nwstats.formula = ~edges +
meandeg +
nodefactor("status", levels = NULL) +
nodematch("status"), verbose = FALSE)
EpiModel Network Diagnostics
Diagnostic Method: Dynamic
Simulations: 10
Time Steps per Sim: 500
Formation Diagnostics
Target Sim Mean Pct Diff Sim SE Z Score SD(Sim Means)
edges 175.00 172.276 -1.557 1.427 -1.909 4.784
meandeg NA 0.689 NA 0.006 NA 0.019
nodefactor.status.i NA 29.462 NA 0.613 NA 2.471
nodefactor.status.s 320.00 315.090 -1.534 2.765 -1.776 7.881
nodematch.status 159.25 157.182 -1.298 1.401 -1.476 3.417
edges 12.772
meandeg 0.051
nodefactor.status.i 6.185
nodefactor.status.s 24.291
nodematch.status 11.929
Duration Diagnostics
Target Sim Mean Pct Diff Sim SE Z Score SD(Sim Means) SD(Statistic)
edges 50 49.924 -0.152 0.388 -0.195 1.804 3.756
Dissolution Diagnostics
Target Sim Mean Pct Diff Sim SE Z Score SD(Sim Means) SD(Statistic)
edges 0.02 0.02 0.709 0 0.943 0 0.011
31.1.2.2 Model 2
The second model will include only an edges term, so no serosorting behavior but same amount of activity among nodes. We are recycling the nw object with the status attribute set from above.
After estimation, the diagnostics here look fine too. Compare the nodefactor and nodematch simulations here to those in the Model 1 diagnostics: more edges for infected persons, and more discordant
(S-I) edges.
dx2 <- netdx(est2, nsims = 10, nsteps = 1000, ncores = 5,
nwstats.formula = ~edges +
meandeg +
nodefactor("status", levels = NULL) +
nodematch("status"), verbose = FALSE)
EpiModel Network Diagnostics
Diagnostic Method: Dynamic
Simulations: 10
Time Steps per Sim: 1000
Formation Diagnostics
Target Sim Mean Pct Diff Sim SE Z Score SD(Sim Means)
edges 175 175.617 0.352 1.236 0.499 2.474
meandeg NA 0.702 NA 0.005 NA 0.010
nodefactor.status.i NA 70.190 NA 0.843 NA 3.443
nodefactor.status.s NA 281.044 NA 2.032 NA 3.089
nodematch.status NA 119.362 NA 0.946 NA 2.013
edges 13.354
meandeg 0.053
nodefactor.status.i 9.383
nodefactor.status.s 22.093
nodematch.status 10.592
Duration Diagnostics
Target Sim Mean Pct Diff Sim SE Z Score SD(Sim Means) SD(Statistic)
edges 50 49.492 -1.017 0.282 -1.803 0.865 3.37
Dissolution Diagnostics
Target Sim Mean Pct Diff Sim SE Z Score SD(Sim Means) SD(Statistic)
edges 0.02 0.02 0.014 0 0.026 0 0.011
31.2 Epidemic Model
For the epidemic model, we will simulate an SI disease, with our two counterfactual network models. The first model will use the serosorting network model, and the second will be the random Bernoulli
31.2.1 Model 1
The model will only include one named parameter, for the transmission probability per contact.
The initial conditions are now passed through to the epidemic model not through init.net as with previous examples, but through the netest object that contains the original starting network with the
vertex attributes for status. However, because netsim will still be expecting an initial conditions list, we need to create an empty object as a placeholder.
For the control settings, we monitor the same network statistics as we did in the network diagnostics above. The resimulate.networks argument must be set to TRUE because the network needs to be
updated at each time step. In this model, we will use the tergmLite approach since we do not need to monitor individual-level network histories.
Here, we run the model. We will return to it to compare output later.
31.2.2 Model 2
The second model includes the same epidemic parameters as Model 1, so all that we need to change is the first parameter for the different network object.
31.2.3 Results
Here we’ll look at the epidemic results first and then the network diagnostics, because the network statistics will be a function of the epidemiology.
Here we see that the serosorting model grows much less quickly than the random model. The reasons are that nodes lower their mean degree upon infection, and therefore have fewer partners over time.
They also tend to preferentially partner with other infecteds.
par(mfrow = c(1,2))
plot(sim, main = "Seroadaptive Behavior")
plot(sim2, main = "No Seroadaptive Behavior")
Another method to show the relative difference is to plot the two epidemic trajectories together. This requires a custom legend.
par(mfrow = c(1,1))
plot(sim, y = "i.num", popfrac = TRUE, sim.lines = FALSE, qnts = 1)
plot(sim2, y = "i.num", popfrac = TRUE, sim.lines = FALSE, qnts = 1,
mean.col = 2, qnts.col = 2, add = TRUE)
legend("topleft", c("Serosort", "Non-Serosort"), lty = 1, lwd = 3,
col = c(4, 2), cex = 0.9, bty = "n")
Now, here are the network statistics that we monitored in Model 1. In contrast to our previous tutorials, where we expected stochastic variation around the targets to be preserved over time, here
every network statistic varies.
Notice first that the nodefactor statistics are moving in opposite directions: as the prevalence of disease increases from 20% to approximately 50% at time 500, the number of infected nodes in an
edge increases. The quantity preserved is not the number of infected nodes in an edge that we set as a target (30 nodes); rather, it is the log-odds of an infected node in an edge conditional on
other terms in the model.
Second, note that the total edges and node-matched edges decline over time. This plot shows those two statistics from one simulation only for clarity. Ordinarily the mean degree is preserved even in
a population with drastically changing size, due to the network density correction that automatically occurs in netsim.
But in this case, the edges and mean degree are shrinking because the number of nodes in the network with the same status is steadily increasing, which draws the node match statistic lower since
there are fewer susceptible-susceptible pairs available. Since the edges is a constant multiple of node-matched edges, they move together.
After looking at so many models where the target statistics were preserved throughout the simulation, this may seem like odd or undesirable behavior here. If it does, think about the main assumption
we began with: men who are infected tend to have lower rates of partnering than men who are not infected. Perhaps this is because they are ill, or because they are trying to protect others. If this
is the case, then what should we expect to happen to the overall rate of partnering as the proportion of men who are infected increases? Presumably it should go down, which is precisely what we see.
This behavior does mean that it can be difficult to tease apart the different contributions of serosorting to the disease dynamics that we see. How much of serosorting’s effect on lowering
transmission came from the rate of overall partner reductions for infected men, and how much came from the homophily effect where infected men partner with each other relatively more? Although we
included both in our model (through the nodefactor and nodematch terms, respectively), one could easily consider intermediate models that only included one term or the other, and then compare the
results of each to the two models we already considered. We would then have a sense of the individual effects of each of these behavioral components, as well as their combined effect. Is the latter
additive, less than additive, or more than additive (i.e., synergistic)?
genlasso-package: Package to compute the solution path of generalized lasso... in glmgen/genlasso: Path Algorithm for Generalized Lasso Problems
This package is centered around computing the solution path of the generalized lasso problem, which minimizes the criterion

    1/2 ||y - X beta||_2^2 + lambda ||D beta||_1
The solution path is computed by solving the equivalent Lagrange dual problem. The dimension of the dual variable u is the number of rows of the penalty matrix D, and the primal (original) and dual
solutions are related by

    beta = (X^T X)^{-1} (X^T y - D^T u)
for a full column rank predictor matrix X. For column rank deficient matrices X, the solution path is not unique and not computed by this package. However, one can add a small ridge penalty to the
above criterion, which can be re-expressed as a generalized lasso problem with full column rank predictor matrix X and hence yields a unique solution path.
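The ridge trick mentioned above can be made concrete. The sketch below (Python/NumPy, with an assumed small ridge constant eps) shows why adding the ridge term restores full column rank: minimizing 1/2 ||y - X beta||^2 + lambda ||D beta||_1 + (eps/2) ||beta||^2 is a generalized lasso in the augmented variables.

```python
import numpy as np

# Augmenting a rank-deficient design matrix with a scaled identity block,
# as in the ridge re-expression of the generalized lasso. eps is an
# assumed small constant; X and y are random placeholders.
rng = np.random.default_rng(0)
n, p, eps = 5, 8, 1e-4              # p > n, so X cannot have full column rank
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

X_aug = np.vstack([X, np.sqrt(eps) * np.eye(p)])   # shape (n + p, p)
y_aug = np.concatenate([y, np.zeros(p)])

# The scaled identity block guarantees p linearly independent columns:
print(np.linalg.matrix_rank(X_aug) == p)
```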
Important use cases include the fused lasso, where D is the oriented incidence matrix of some underlying graph (the orientations being arbitrary), and trend filtering, where D is the discrete
difference operator of any given order k.
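For trend filtering, the order-k discrete difference operator is easy to construct by repeatedly differencing the identity matrix. A minimal sketch in Python/NumPy rather than the package's R interface:

```python
import numpy as np

def difference_operator(n, k=1):
    """Discrete difference operator of order k, shape (n - k, n)."""
    D = np.eye(n)
    for _ in range(k):
        D = np.diff(D, axis=0)  # row-wise first differences
    return D

# Sanity check: a degree-(k-1) polynomial trend is annihilated by the
# order-k operator, which is why trend filtering fits piecewise polynomials.
n = 8
D2 = difference_operator(n, k=2)
print(D2.shape)                                        # (6, 8)
print(np.allclose(D2 @ np.arange(n, dtype=float), 0))  # True for a linear trend
```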
The general function genlasso computes a solution path for any penalty matrix D and full column rank predictor matrix X (adding a ridge penalty when X is rank deficient). For the fused lasso and
trend filtering problems, the specialty functions fusedlasso and trendfilter should be used as they deliver a significant increase in speed and numerical stability.
For a walk-through of using the package for statistical modelling see the included package vignette; for the appropriate background material see the generalized lasso paper referenced below.
Tibshirani, R. J. and Taylor, J. (2011), "The solution path of the generalized lasso", Annals of Statistics 39 (3) 1335–1371.
Arnold, T. B. and Tibshirani, R. J. (2014), "Efficient implementations of the generalized lasso dual path algorithm", arXiv: 1405.3222.
|
{"url":"https://rdrr.io/github/glmgen/genlasso/man/genlasso-package.html","timestamp":"2024-11-12T20:44:30Z","content_type":"text/html","content_length":"24813","record_id":"<urn:uuid:cb1e68d1-063f-4439-aaa5-1eae445048f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00028.warc.gz"}
|
projection practice sheet -2nd set
Engineering Drawing
Projection Part 2 Practice Sheet
By Soumyodeep Mukherjee
A solid has three dimensions, the length, breadth and thickness or height. A solid may be represented
by orthographic views, the number of which depends on the type of solid and its orientation with
respect to the planes of projection. Solids are classified into two major groups: (i) polyhedra, and
(ii) solids of revolution.
A polyhedron is defined as a solid bounded by plane surfaces called faces. Polyhedra are classified as:
(i) regular polyhedra, (ii) prisms and (iii) pyramids.
Regular Polyhedra
A polyhedron is said to be regular if its surfaces are regular polygons. The following are
some of the regular polyhedra.
Prisms: A prism is a polyhedron having two equal ends called the bases parallel to each other. The
two bases are joined by faces, which are rectangular in shape. The imaginary line passing through the
centers of the bases is called the axis of the prism.
A prism is named after the shape of its base. For example, a prism with a square base is called a square
prism, one with a pentagonal base is called a pentagonal prism, and so on (Fig). The nomenclature
of the prism is given in Fig.
Figure 3.18
(a) Tetrahedron: It consists of four equal faces, each one being an equilateral triangle.
(b) Hexahedron (cube): It consists of six equal faces, each a square.
(c) Octahedron: It has eight equal faces, each an equilateral triangle.
(d) Dodecahedron: It has twelve regular and equal pentagonal faces.
(e) Icosahedron: It has twenty equal, equilateral triangular faces.
Pyramids: A pyramid is a polyhedron having one base, with a number of isosceles triangular faces,
meeting at a point called the apex. The imaginary line passing through the centre of the base and the
apex is called the axis of the pyramid.
The pyramid is named after the shape of its base. Thus, a square pyramid has a square base, a
pentagonal pyramid has a pentagonal base, and so on. The nomenclature of a pyramid is shown in Fig.
Figure 3.19
Types of Pyramids:
There are many types of Pyramids, and they are named after the shape of their base.
These are the triangular pyramid, square pyramid, pentagonal pyramid, hexagonal pyramid and so on.
Solids of Revolution: If a plane surface is revolved about one of its edges, the solid generated is
called a solid of revolution. The examples are (i) Cylinder, (ii) Cone, (iii) Sphere.
Frustums and Truncated Solids: If a cone or pyramid is cut by a section plane parallel to its
base and the portion containing the apex or vertex is removed, the remaining portion is called
frustum of a cone or pyramid
Position of a Solid with Respect to the Reference Planes: The position of a solid in
space may be specified by the location of either the axis, base, edge, diagonal or face with respect to the
principal planes of projection. The following are the positions of a solid considered.
1. Axis perpendicular to HP
2. Axis perpendicular to VP
3. Axis parallel to both the HP and VP
4. Axis inclined to HP and parallel to VP
5. Axis inclined to VP and parallel to HP
6. Axis inclined to both the planes (VP and HP)
The position of solid with reference to the principal planes may also be grouped as follows:
1. Solid resting on its base.
2. Solid resting on anyone of its faces, edges of faces, edges of base, generators, slant edges, etc.
3. Solid suspended freely from one of its corners, etc.
1. Axis perpendicular to one of the principal planes:
When the axis of a solid is perpendicular to one of the planes, it is parallel to the other. Also, the
projection of the solid on that plane will show the true shape of the base.
When the axis of a solid is perpendicular to H.P, the top view must be drawn first and then the front
view is projected from it. Similarly when the axis of the solid is perpendicular to V.P, the front view
must be drawn first and then the top view is projected from it.
Figure 3.20
Simple Problems:
1. Axis perpendicular to HP
A square pyramid, having a base with a 40 mm side and a 60 mm axis, is resting on its base on the HP.
Draw its projections when (a) a side of the base is parallel to the VP, (b) a side of the base is
inclined at 30° to the VP, and (c) all the sides of the base are equally inclined to the VP.
Figure 3.21
2. Axis perpendicular to VP
A pentagonal prism, having a base with a 30 mm side and a 60 mm long axis, has one of its bases in
the VP. Draw its projections when (a) a rectangular face is parallel to and 15 mm above the HP, (b)
a rectangular face is perpendicular to the HP, and (c) a rectangular face is inclined at 45° to the HP.
Figure 3.22
3. Axis parallel to both the HP and VP
A pentagonal prism, having a base with a 30 mm side and a 60 mm long axis, is resting on one of its
rectangular faces on the HP, with its axis parallel to the VP. Draw its projections.
Figure 3.23
4. Axis inclined to HP and parallel to VP
A hexagonal prism, having a base with a 30 mm side and a 75 mm long axis, has an edge of its base on
the HP. Its axis is parallel to the VP and inclined at 45° to the HP. Draw its projections.
Figure 3.24
5. Axis inclined to VP and parallel to HP
A hexagonal prism, having a base with a 30 mm side and a 65 mm long axis, has an edge of its
base in the VP such that the axis is inclined at 30° to the VP and parallel to the HP. Draw its
projections.
Figure 3.25
6. Axis inclined to both the principal planes (HP and VP)
A solid is said to be inclined to both the planes when (i) the axis is inclined to both the planes,
(ii) the axis is inclined to one plane and an edge of the base is inclined to the other. In this
case the projections are obtained in three stages.
Stage I: Assume that the axis is perpendicular to one of the planes and draw the projections.
Stage II: Rotate one of the projections till the axis is inclined at the given angle and project the other
view from it.
Stage III: Rotate one of the projections obtained in Stage II, satisfying the remaining condition and
project the other view from it.
A cube of 50 mm long edges is so placed on the HP on one corner that a body diagonal is
parallel to the HP and perpendicular to the VP. Draw its projections.
Solution Steps:
1. Assuming the cube is standing on the HP, begin with the TV, a square with all sides
equally inclined to xy. Project the FV and name all points of the FV and TV.
2. Draw a body diagonal joining c' with 3' (this can become parallel to xy).
3. From 1' drop a perpendicular on this diagonal and name it p'.
4. Draw the 2nd FV in which the 1'-p' line is vertical, meaning the c'-3' diagonal
must be horizontal. Now project the TV as usual.
5. In the final TV draw the same diagonal perpendicular to the VP, as stated in the problem. Then project the
final FV as usual.
Figure 3.26
A cone of 40 mm diameter and 50 mm axis is resting on one of its generators on the HP, which makes a 30°
inclination with the VP. Draw its projections.
Solution Steps:
Resting on the HP on one generator means lying on the HP.
1. Assume it is standing on the HP.
2. Its TV will show the true shape of the base (a circle).
3. Draw a 40 mm diameter circle as the TV and, taking the 50 mm axis, project the FV (a triangle).
4. Name all points as shown in the illustration.
5. Draw the 2nd FV in the lying position, i.e. o'e' on xy, and project its TV below xy.
6. Make visible lines dark and hidden lines dotted, as per the procedure.
7. Then construct the remaining inclination with the VP (generator o1e1 at 30° to xy as shown) and project the final FV.
Figure 3.27
|
{"url":"https://studyres.com/doc/24474238/projection-practice-sheet--2nd-set","timestamp":"2024-11-05T23:48:40Z","content_type":"text/html","content_length":"64170","record_id":"<urn:uuid:97d9112e-d2d0-495f-9233-53b41e9ed70c>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00649.warc.gz"}
|
What Is Net Present Value (NPV) in Real Estate Investing?
Net Present Value (NPV) is a financial concept widely used in real estate investing to evaluate the profitability of an investment opportunity. It helps investors determine the value of future cash
flows by discounting them to their present value.
By calculating the NPV of a real estate investment, investors can assess whether the investment is financially viable and compare it to alternative investment options.
Calculating NPV in Real Estate Investing
Calculating the NPV of a real estate investment involves several steps. Let’s break down the process:
1. Estimating Future Cash Flows
The first step in calculating NPV is to estimate the future cash flows expected from the real estate investment. This includes rental income, potential appreciation, tax benefits, and any other
income or expenses associated with the property over the investment period. It is important to be realistic and consider various scenarios and potential risks.
2. Determining the Discount Rate
The discount rate is the rate of return required by the investor to compensate for the risk and time value of money. It reflects the opportunity cost of investing in a particular real estate project
instead of an alternative investment. The discount rate should be based on the investor’s required rate of return and the level of risk associated with the investment.
3. Discounting Future Cash Flows
Once the future cash flows and discount rate are determined, the next step is to discount the future cash flows to their present value. This involves applying the discount rate to each cash flow and
adjusting it for the time value of money. The formula used for discounting cash flows is:
Present Value = Future Cash Flow / (1 + Discount Rate)^n
• Present Value is the value of the cash flow in today’s dollars
• Future Cash Flow is the expected cash flow in a future period
• Discount Rate is the required rate of return
• n is the number of periods in the future
4. Summing the Present Values
After discounting each future cash flow, the next step is to sum the present values to calculate the net present value. The formula for calculating NPV is:
NPV = Sum of Present Values – Initial Investment
If the NPV is positive, it indicates that the investment is expected to generate a return higher than the required rate of return. A negative NPV suggests that the investment may not be financially viable.
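The two formulas above can be combined into a short Python function; the cash flows, discount rate and purchase price below are made-up numbers for illustration:

```python
def npv(rate, cashflows, initial_investment):
    """Sum of discounted future cash flows minus the initial outlay."""
    present_values = [cf / (1 + rate) ** n for n, cf in enumerate(cashflows, start=1)]
    return sum(present_values) - initial_investment

# $100,000 purchase, $12,000/year rent for 5 years, $80,000 sale in year 5,
# discounted at an 8% required rate of return:
flows = [12000, 12000, 12000, 12000, 12000 + 80000]
print(round(npv(0.08, flows, 100000), 2))
```

A handy sanity check: a single $110 cash flow one year out, discounted at 10%, has a present value of $100, so against a $100 outlay the NPV is zero.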
Interpreting NPV in Real Estate Investing
Interpreting the NPV of a real estate investment requires considering the context and specific circumstances of the investment. Here are some key points to consider:
1. Positive NPV
A positive NPV indicates that the investment is expected to generate a return higher than the required rate of return. This suggests that the investment is financially viable and may be a good
opportunity to pursue. However, it is essential to conduct a thorough analysis of the investment’s risks, market conditions, and potential future cash flows before making a final decision.
2. Negative NPV
A negative NPV suggests that the investment may not meet the investor’s required rate of return. It could indicate that the investment is too risky, the expected cash flows are insufficient, or the
initial investment is too high. In such cases, it may be wise to reconsider the investment or explore alternative opportunities.
3. Comparing NPV of Different Investments
NPV can also be used to compare multiple investment opportunities. By calculating the NPV of different investments using the same discount rate, investors can evaluate which investment is expected to
generate the highest return. However, it is crucial to consider other factors such as risk, market conditions, and long-term potential.
Overall, understanding and calculating the NPV of a real estate investment is essential for making informed investment decisions. It helps investors evaluate the financial viability of an investment
opportunity, compare different options, and assess the potential return on investment.
However, it is important to remember that NPV is just one tool among many in the investor’s toolkit and should be used in conjunction with other financial and qualitative analyses.
|
{"url":"https://webuyrealestateinc.com/p/what-is-net-present-value-npv-in-real-estate-investing","timestamp":"2024-11-06T08:54:44Z","content_type":"text/html","content_length":"170101","record_id":"<urn:uuid:22f4519a-732d-4ac8-95a0-691bac0809b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00239.warc.gz"}
|
The IRN intends to gather four teams:
• Geometry (algebraic and analytic)
• Number theory (arithmetic and geometric)
• PDE (kinetic theory and collective dynamics)
• Probability
The scientific program of the team in Geometry:
The scientific program of the team in Number Theory:
The scientific program of the team in PDE:
The scientific program of the team in Probability:
|
{"url":"https://www.math.u-bordeaux.fr/~pthieull/LIA/documents.html","timestamp":"2024-11-13T15:58:13Z","content_type":"application/xhtml+xml","content_length":"5604","record_id":"<urn:uuid:624e9ebd-e18d-4f22-ab32-fb4deb8026d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00762.warc.gz"}
|
Is package width measure left to right or front to back? - Answers
How do you measure an office chair?
To measure an office chair, typically you would measure the height, width, and depth of the chair. The height is measured from the floor to the top of the backrest, the width is measured at the
widest point of the chair, and the depth is measured from the front of the seat to the backrest.
|
{"url":"https://sports.answers.com/team-sports/Is_package_width_measure_left_to_right_or_front_to_back","timestamp":"2024-11-08T12:50:23Z","content_type":"text/html","content_length":"157272","record_id":"<urn:uuid:15e80d54-6f33-4588-90ce-cf49d3a19d89>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00258.warc.gz"}
|
Let Z Denote A Random Variable That Has A Standard Normal Distribution. Determine Each Of The Probabilities (2024)
The inverse element is preserved, i.e. for any element (g, a) in the set, there exists an inverse element (g−1, a−1) such that (g, a) (g−1, a−1) = (1, 1) for the matrices.
To show that the following maps define a group action, we need to prove that the elements in the set are homomorphisms, i.e. that the action of a group element can be defined by multiplying the
original element by another element in the group (by means of multiplication) for the matrices.
Let's examine each of the given sets in detail:(1) GL(n, R) × Matn(R) - Matn(R) defined as (A, X) → XA−1:To prove that this map defines a group action, we need to verify that the following properties
are satisfied:The action is well-defined, i.e. given any two pairs (A, X) and (B, Y) in the set, we can show that (B, Y) (A, X) = (BA, YX) ∈ Matn(R). The identity element is preserved, i.e. given a
matrix X ∈ Matn(R), the element (I, X) will be mapped to X.
The action is associative, i.e. given a matrix X ∈ Matn(R) and group elements A, B, C ∈ GL(n, R), the following equality will hold: [(A, X) (B, X)] (C, X) = (A, X) [(B, X) (C, X)]. The inverse
element is preserved, i.e. for any element (A, X) in the set, there exists an inverse element (A−1, XA−1) such that (A, X) (A−1, XA−1) = (I, X).(2) (GL(n, R) × GL(n, R)) × Matr(R) -› Matn(R) defined
as ((A, B), X) → AXB−1:Let's again verify the following properties for this map to define a group action: The action is well-defined, i.e. given any two pairs ((A, B), X) and ((C, D), Y), we can show
that ((C, D), Y) ((A, B), X) = ((C, D) (A, B), YX) ∈ Matn(R). The identity element is preserved, i.e. given a matrix X ∈ Matn(R), the element ((I, I), X) will be mapped to X. The action is
associative, i.e. given a matrix X ∈ Matn(R) and group elements (A, B), (C, D), E ∈ GL(n, R), the following equality will hold: [((A, B), X) ((C, D), Y)] ((E, F), Z) = ((A, B), X) [((C, D), Y) ((E,
F), Z)].
The inverse element is preserved, i.e. for any element ((A, B), X) in the set, there exists an inverse element ((A−1, B−1), AXB−1) such that ((A, B), X) ((A−1, B−1), AXB−1) = ((I, I), X).(3) R × R2 →
R2 defined as (r, (x, y)) → (x + r4, y):Again, let's check the following properties to show that this map defines a group action: The action is well-defined, i.e. given any two pairs (r, (x, y)) and
(s, (u, v)), we can show that (s, (u, v)) (r, (x, y)) = (s + r, (u + x4, v + y)) ∈ R2.
The identity element is preserved, i.e. given an element (x, y) ∈ R2, the element (0, (x, y)) will be mapped to (x, y). The action is associative, i.e. given an element (x, y) ∈ R2 and group elements
r, s, t ∈ R, the following equality will hold: [(r, (x, y)) (s, (x, y))] (t, (x, y)) = (r, (x, y)) [(s, (x, y)) (t, (x, y))]. The inverse element is preserved, i.e. for any element (r, (x, y)) in the
set, there exists an inverse element (-r, (-x4, -y)) such that (r, (x, y)) (-r, (-x4, -y)) = (0, (x, y)).

(4) F^× × F → F defined as (g, a) → ga, where F is a field and F^× = (F \ {0}, ·) is the multiplicative group of nonzero elements in F:

To show that this map defines a group action, we need to verify that the following properties are satisfied: The action is well-defined, i.e. given any two pairs (g, a) and (h, b), we can show that (g, a) (h, b) = (gh, ab) ∈ F. The identity element is preserved, i.e. given an element a ∈ F, the element (1, a) will be mapped to a. The action is associative, i.e. given elements a, b, c ∈ F and group elements g, h, k ∈ F^×, the following equality will hold: [(g, a) (h, b)] (k, c) = (g, a) [(h, b) (k, c)]. The inverse element is preserved, i.e. for any element (g, a) in the set, there exists an inverse element (g⁻¹, a⁻¹) such that (g, a) (g⁻¹, a⁻¹) = (1, 1).
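For case (4), the group-action axioms can also be checked numerically over the field of rationals; this sketch (the function name act is illustrative) uses exact fractions so the equalities hold exactly:

```python
from fractions import Fraction

def act(g, a):
    """Action of the multiplicative group F^x on the field F, by g . a = g*a."""
    return g * a

g, h = Fraction(3, 2), Fraction(-5, 7)   # nonzero elements of F^x
a = Fraction(4, 9)                       # an arbitrary element of F

assert act(Fraction(1), a) == a            # identity: 1 . a = a
assert act(g * h, a) == act(g, act(h, a))  # compatibility: (gh) . a = g . (h . a)
assert act(1 / g, act(g, a)) == a          # g^-1 undoes the action of g
```

Of course a finite check is not a proof, but it makes the three axioms concrete.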
|
{"url":"https://enketr.shop/article/let-z-denote-a-random-variable-that-has-a-standard-normal-distribution-determine-each-of-the-probabilities","timestamp":"2024-11-04T23:27:58Z","content_type":"text/html","content_length":"63320","record_id":"<urn:uuid:51a5f8ec-fdab-4e4b-b0d7-59c77336a50b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00238.warc.gz"}
|
Inquiry Maths
Inquiry Maths is a model of learning that encourages students to regulate their own activity while exploring a mathematical prompt (an equation, a statement or a diagram). Inquiries involve
questioning, conjecturing, generalising and proving and, when required, listening to or giving an explanation.
In Inquiry Maths lessons, students learn to take responsibility for directing the inquiry while the teacher acts as the arbiter of legitimate mathematical knowledge and activity.
Inquiry Maths establishes a culture of curiosity, collaboration and openness in the classroom. Students meet new concepts and procedures when they are necessary, meaningful and connected – necessary
to make progress on a line of inquiry, meaningful in the context of the inquiry, and connected to other concepts and procedures in the field of inquiry.
Diann King reports on the creativity of her year 7 class during an inquiry into the product of factors prompt.
November 2024
Andrew Blair addresses the dilemma over when or when not to tell students what to do in the inquiry classroom.
November 2024
Helen Hindle has created slides with structured lines of inquiry into the product of factors prompt she devised.
November 2024
Most viewed inquiries in October
The Mathematics Hub funded by the Australian Government Department of Education, Skills and Employment recommends 50 inquiries from the Inquiry Maths website.
What students say about Inquiry Maths
Year 11 students evaluate an Inquiry Maths lesson on mini-whiteboards.
After supporting in a year 7 inquiry lesson on the combined transformations prompt, Husine, a year 12 Further Maths A-level student, commented:
"When we were in year 7 we just learnt how to do things, like reflections, without thinking about it. In the inquiry you dig into why things are the way they are.
"I was really surprised by the connections between the transformations. The inquiry was much better because the year 7s were really interested."
"My maths this week using a prompt has improved significantly because I have noticed and seen things I didn't even know was possible." Year 6
“Doing Inquiry Maths was the best maths lesson I ever had because it taught me how to think." Year 7
"My independent learning ability has improved. It made me think outside the box!" Year 11
"It was a new technique, a maths prompt. For some reason questions just flew out of me." Year 6
"Having a say in what we do makes me work harder." Year 10
"I ask lots more questions in maths after doing inquiry lessons. Last week I made up my own inquiry on enlargements." Year 8
"I used to just explore a problem, but now I have to stop and decide what to do." Year 10
"The inquiry was so interesting that it was the first time I talked about my maths lessons at home." Year 7
"I feel like I learn a lot more in inquiry lessons and they're more engaging." Year 8
A year 7 student gives written feedback.
What teachers say about Inquiry Maths
"The whole class came together to collaborate on the inquiry. The energy in the room contributed to a brilliant sense of community." Zack Miodownik
"I love the focus in Inquiry Maths on developing mathematical behaviours in students whilst keeping the integrity of the subject." Claire Lee
"I was blown away by the by the depth of student responses." Devon Burger
"It was an absolute joy to teach in this way." Emmy Bennett
"The students’ responses were inspiring, amazing, and truly beyond any of my expectations." Michelle Cole
"I didn’t expect the level of commitment and mathematical language from the class. It was one of the best lessons I have had with them!" Shawki Dayekh
"Some brilliant work from the students. One of those lessons that makes you love being a teacher." Chris McGrane
"An inquiry approach is ideal for a mixed attainment class because it supports and challenges all students and allows them to direct their own learning." Helen Hindle
Dr Andrew Blair, a secondary school teacher in London (UK), created the website in 2012 to promote inquiry learning in mathematics classrooms.
Independent of all companies and organisations, the site is being developed in collaboration with teachers from around the world. Their experiences and reflections enrich its pages.
We welcome feedback from teachers who use Inquiry Maths prompts in the classroom. Contact Inquiry Maths
Andrew Blair developed the first five prompts on the Inquiry Maths website with the assistance of a Teacher Fellowship from the Gatsby Charitable Foundation in 2004-05.
|
{"url":"https://www.inquirymaths.com/","timestamp":"2024-11-01T22:38:02Z","content_type":"text/html","content_length":"204006","record_id":"<urn:uuid:738653d9-bd20-4532-9753-874d5ec0cd60>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00341.warc.gz"}
|
[S:Deleted text in red:S] / Inserted text in green
It's been known since before Pythagoras that if the lengths of the sides of a right-angled triangle are a, b and c,
with c the longest, then a, b and c satisfy the equation EQN:a^2+b^2=c^2.
[[[>50 This is a simple example _ of a Diophantine equation. ]]]
That equation has many solutions in which a, b and c are integers. For example:
* a=5, b=12 and c=13
* a=7, b=24 and c=25
* a=8, b=15 and c=17
Fermat made the hypothesis that EQN:a^n+b^n=c^n has no integer solutions when n > 2.
In 1637 he wrote, in his copy of Claude-Gaspar Bachet's translation of the famous Arithmetica of Diophantus,
"I have a truly marvellous proof of this proposition which this margin is too narrow to contain."
(Original Latin: !/ "Cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet." [S:!):S] !/ )
He omitted to write it down anywhere else.
For over 350 years many [S:Mathematicians,:S] mathematicians, despite much effort, failed to produce a correct proof until the
British [S:Mathematician,:S] mathematician Andrew Wiles published a proof in 1995. He was later knighted for his effort.
It is now generally believed that [S:Fermat:S] Fermat's proof had a [S:proof, but:S] flaw, which he discovered [S:a flaw.:S] later. There is a flawed proof in
which the flaw is to use something which at the time was generally thought to be true, but was later
shown to be false. This is, of course, sheer speculation, but it is mildly satisfying.
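The specific triples listed above, and the absence of small counterexamples to Fermat's claim for n = 3, can be checked with a short brute-force script (a finite search illustrates the claim but of course proves nothing):

```python
# Verify the Pythagorean triples listed in the text...
triples = [(5, 12, 13), (7, 24, 25), (8, 15, 17)]
assert all(a**2 + b**2 == c**2 for a, b, c in triples)

# ...then search for a^3 + b^3 = c^3 with a, b, c up to 100.
cubes = {c**3 for c in range(1, 101)}
hits = [(a, b) for a in range(1, 101) for b in range(a, 101) if a**3 + b**3 in cubes]
print(hits)  # -> [] : no solutions, consistent with Fermat's claim for n = 3
```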
|
{"url":"https://www.livmathssoc.org.uk/cgi-bin/sews_diff.py?FermatsLastTheorem","timestamp":"2024-11-08T01:52:43Z","content_type":"text/html","content_length":"3008","record_id":"<urn:uuid:2d7ae93f-f3e3-4d73-9c93-db99f653006c>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00211.warc.gz"}
|
Algebra: Solving Quadratics by Factoring
1. / Algebra: Solving Quadratics by Factoring
Algebra: Solving Quadratics by Factoring
Solving Quadratics by Factoring
If you can transform an equation into a factorable quadratic polynomial, it is very simple to solve. Even though this technique will not work for all quadratic equations, when it does, it is by far
the quickest and simplest way to get an answer. Therefore, unless a problem specifically tells you to use another technique to solve a quadratic equation, you should try this one first. If you get a
prime (unfactorable) polynomial, you can always shift to one of the other techniques you'll learn later in this section.
To solve a quadratic equation by factoring, follow these steps:
How'd You Do That?
If you have the equation (x - a)(x - b) = 0, Step 3 tells you to change that into the twin equations:
x - a = 0 or x - b = 0
Are you wondering why that's allowed? It's thanks to something called the zero product property.
Think about it this way: If two things are multiplied together (in this case, the quantities (x - a) and (x - b)) and the result is 0, then at least one of those two quantities actually has to be
equal to 0! There's no way to multiply two or more things to get 0 unless at least one of them equals 0.
1. Set the equation equal to 0. Move all of the terms to the left side of the equation by adding or subtracting them, as appropriate, leaving only 0 on the right side of the equation.
2. Factor the polynomial completely. Use one of the techniques you learned in Factoring Polynomials to factor; remember to always factor out the greatest common factor first.
3. Set each of the factors equal to 0. You're basically creating a bunch of tiny, little equations whose left sides are the factors and whose right sides are each 0. It's good form to separate these
little equations with the word "or," because any one of them could be true.
4. Solve the smaller equations and check your answers. Each of the solutions to the tiny, little equations is also a solution to the original equation. However, to make sure they actually work, you
should plug them back into that original equation to verify that you get true statements.
The hardest part of this technique is actually the factoring itself, and since that's not a new concept, this procedure is very simple and straightforward.
Example 1: Solve the equations, and give all possible solutions.
• (a) x^2 - 6x + 9 = 0
• Solution: Since this equation is already set equal to 0, start by factoring the left side.
• (x - 3)(x - 3) = 0
• Now set each factor equal to 0.
• x - 3 = 0 or x - 3 = 0
• x = 3 or x = 3
• Well, since both factors were the same, both solutions ended up equal, so the equation x^2 - 6x + 9 = 0 only has one valid solution, x = 3. When you get an answer like this, which appears as a
possible solution twice, it has a special name: it's called a double root.
Critical Point
A double root is a repeated solution for a polynomial equation; it's the result of a repeated factor in the polynomial.
You've Got Problems
Problem 1: Give all the solutions to the equation 4x^3 = 25x.
• Check to make sure that 3 is a valid answer by plugging it back into the original equation.
• x^2 - 6x + 9 = 0
• 3^2 - 6(3) + 9 = 0
• 9 - 18 + 9 = 0
• 0 = 0
• There's no doubt that 0 = 0 is a true statement, so you got the answer right.
• (b) 3x^2 + 10x = -4x + 24
• Solution: Your first job is to set this equal to 0; to accomplish this, add 4x to and subtract 24 from both sides.
• 3x^2 + 14x -24 = 0
• Factor the trinomial using the bomb method discussed in Factoring Polynomials. The two mystery numbers you're looking for are -4 and 18.
• 3x^2 + (-4 + 18)x - 24 = 0 3x^2 -4x + 18x - 24 = 0
• x(3x - 4) + 6(3x - 4) = 0
• (3x - 4)(x + 6) = 0
• Set each factor equal to 0 and solve.
• 3x - 4 = 0 or x + 6 = 0
• x = 4/3 or x = -6
• Both of these answers work when you check them.
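The hand-factored answers can be double-checked against the quadratic formula; this small sketch is independent of the factoring technique itself:

```python
from math import isclose

def quadratic_roots(a, b, c):
    """Roots of ax^2 + bx + c = 0 via the quadratic formula (real roots assumed)."""
    disc = (b * b - 4 * a * c) ** 0.5
    return (-b - disc) / (2 * a), (-b + disc) / (2 * a)

# Example (b): 3x^2 + 14x - 24 = 0, factored as (3x - 4)(x + 6) = 0
r1, r2 = quadratic_roots(3, 14, -24)
assert isclose(min(r1, r2), -6) and isclose(max(r1, r2), 4 / 3)

# Example (a): x^2 - 6x + 9 = 0 has the double root x = 3
assert quadratic_roots(1, -6, 9) == (3.0, 3.0)
```

The double root shows up here as the discriminant b^2 - 4ac coming out to exactly 0, which collapses the two formula roots into one.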
Excerpted from The Complete Idiot's Guide to Algebra © 2004 by W. Michael Kelley. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with
Alpha Books, a member of Penguin Group (USA) Inc.
You can purchase this book at Amazon.com and Barnes & Noble.
|
{"url":"https://www.factmonster.com/math-science/mathematics/algebra/algebra-solving-quadratics-by-factoring","timestamp":"2024-11-02T09:12:43Z","content_type":"text/html","content_length":"40078","record_id":"<urn:uuid:fa79005d-b338-414c-afb4-7a7af3e4c953>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00762.warc.gz"}
|
1-Stage Freedom analysis
Cluster-level sensitivity and specificity with variable cut-points
This utility estimates the probability of detecting disease (herd- or cluster-level sensitivity) in a large (infinite) population, if disease is present at the specified design prevalence, assuming a test of
known sensitivity and specificity and a variable cut-point number of positives to determine the test result. These analyses use the method of Martin et al. (1992) (Prev Vet Med, 14:33-43) and
the binomial distribution function, assuming known test sensitivity and test specificity and a variable cut-point number of positives to declare a population infected (i.e. a variable, non-zero
number of positive results can be allowed while the population is still recognised as free). The population is classified as infected if the number of positive results is equal to or greater than the cut-point.
Inputs are:
• Sample size tested;
• Test sensitivity and specificity;
• Design (target) prevalence; and
• The cut-point number of positives.
Outputs are:
• The cluster-level sensitivity (SeH) and specificity (SpH) for the given sample size;
• Test sensitivity and test specificity;
• Design prevalence;
• The cut-point number of positives; and
• Tables and graphs of cluster-level sensitivity and specificity values for a range of cut-point, sample size and design prevalence values.
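A minimal sketch of the underlying calculation (not the epitools implementation itself; following the Martin et al. approach, the per-animal probability of a positive result in an infected herd is taken as prevalence × Se + (1 − prevalence) × (1 − Sp)):

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def herd_se_sp(n, se, sp, prev, cutpoint):
    """Cluster-level Se and Sp when a herd is called infected
    at >= cutpoint positive results out of n tested."""
    p_pos_infected = prev * se + (1 - prev) * (1 - sp)  # P(test positive | infected herd)
    p_pos_free = 1 - sp                                 # P(test positive | free herd)
    seh = binom_sf(cutpoint, n, p_pos_infected)
    sph = 1 - binom_sf(cutpoint, n, p_pos_free)
    return seh, sph

seh, sph = herd_se_sp(n=50, se=0.9, sp=0.98, prev=0.1, cutpoint=2)
```

Raising the cut-point trades cluster-level sensitivity for cluster-level specificity, which is exactly the trade-off the tables and graphs above are meant to explore.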
CPM Homework Help
Give the equations of two functions, $f \left(x\right)$ and $g\left(x\right)$, so that $f \left(x\right)$ and $g\left(x\right)$ cross at exactly:
There is more than one answer to each of the following situations. Only check one box at a time and be sure to make up your own equations.
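As one worked illustration of the idea (my own example, not from the lesson): take f(x) = x² against a horizontal line g(x) = c; the sign of c decides whether they cross twice, touch exactly once, or miss entirely.

```python
def crossings_with_horizontal_line(c):
    """Number of solutions of x^2 = c, i.e. crossings of f(x) = x^2
    with the horizontal line g(x) = c."""
    if c > 0:
        return 2  # two symmetric intersections at +/- sqrt(c)
    if c == 0:
        return 1  # the line is tangent at the vertex
    return 0      # the line sits below the parabola

# g(x) = 1: two crossings; g(x) = 0: exactly one; g(x) = -1: none
```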
This is always a passionate discussion in crypto boards. Many say you’re crazy to mine nowadays, or ever, but some say the return on investment is worth it. What’s the right answer? Well, like
anything, there is no right answer, just what’s right for you. I’ll walk you through my thoughts and why I chose to mine.
First let’s pick a date in time that we can use as a baseline, how about 12 months ago to the day I wrote this. So our baseline date is 1/19/2022, and here are the current prices of equipment and
crypto I’ll use in this analysis:
BTC $42,374.04
LTC $141.89
ETH $3163.85
S19 95TH (approx.) $8,800
L7 9300 (approx.) $19,200
Baseline costs on 01/19/2022
Now we obviously need to have the same costs as of today, 1/19/2023 for comparison purposes:
BTC $20,713.82
LTC $82.87
ETH $1515.73
S19 95TH (approx.) $1,800
L7 9300 (approx.) $9,500
Comparison costs on 01/19/2023
Now obviously we've been in a crypto bear market, but that's neither an excuse nor a reason to dodge the relevant question: what's better, mining or purchasing crypto? Let's see how it shook out over the past 12 months.
Let’s take a few cases and walk through the numbers. I’ll do my best to estimate costs (a lot of it is actual numbers as I run the miners I’m using in the examples) and keep things on the level. I
make several assumptions which may not be your exact case, but I have to make them to compare apples to apples. Also this all assumes that we buy and operate everything on the baseline day until we
sell everything on the comparison end date.
Case example Starting Investment Ending Investment
1 – Purchasing 2 BTC miners $20,000 $6,536
2 – Purchasing an LTC miner $20,000 $18,415
3 – Buying BTC $20,000 $9,776
4 – Buying LTC $20,000 $11,681
5 – Buying ETH $20,000 $9,581
Case summary table
As you can see, in every case we take a loss. What's interesting is how much less the loss is by selecting the right miner. I'm not advocating to anyone how to spend/invest their money; however, when I do choose to invest, this is the methodology that I use. Now how did I come up with the data? You can read the summary of each case below. Like I've said before, everyone's numbers will vary depending on several factors, but this hopefully gives you some insight into one method of how to choose which route to go.
Case #1 – Investing $20,000 primarily into BTC mining
In this case we'll purchase two S19 95TH for $8,800 each ($17,600 total) and put the final $2,400 into BTC. For starters, over the course of the last year that $2,400 in BTC is now worth about $1,174.
The $17,600 we spent on the two S19 95TH has yielded us approximately .25 BTC (using a linear regression from mining over the past year) and at today’s price that’s around $5,178. We need to subtract
for electricity however so we’ll use the rate I’m at which is around $0.06 per kwh, which equates to $4.68 a day or $1,708 a year per miner. That’s a total electricity cost of $3,416.
The residual value (just work with me here people) of the miners is approximately $1,800 each ($3,600 total) as of 01/19/2023.
So our end-state for this scenario is:
$1,174 BTC investment
$5,178 BTC mined
$3,600 S19 95TH value
-$3,416 operating costs
Our initial $20,000 investment is now worth $6,536.
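The bookkeeping in each case is the same four-term sum, something like the sketch below (the function name is mine; the figures come straight from the cases in this post):

```python
def end_state(holdings, mined, residual, operating_cost):
    """Ending value of a scenario: leftover coin stake, coins mined
    (valued at the end date), hardware resale value, minus power."""
    return holdings + mined + residual - operating_cost

# Case 1: two S19s plus the $2,400 BTC stake
case1 = end_state(holdings=1174, mined=5178, residual=3600, operating_cost=3416)  # -> 6536

# Electricity sanity check: a ~3.25 kW miner at $0.06/kWh
daily_power_cost = 3.25 * 24 * 0.06  # about $4.68/day, ~$1,708/year
```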
Case #2- Investing $20,000 primarily into LTC/DOGE mining
In this case we’ll purchase an L7 9300 for $19,200 and put the final $800 into LTC. For starters, over the course of the last year that $800 in LTC is now worth about $467.
The $19,200 we spent on the L7 has yielded us (using a linear regression from mining over the past year) approximately 131 LTC over the year, at today’s price that’s around $10,856. We need to
subtract for electricity however so we’ll use the rate I’m at which is around $0.06 per kwh, which equates to $4.68 a day or $1,708 a year.
The residual value (just work with me here people) of the miner is approximately $8,800 as of 01/19/2023.
So our end-state for this scenario is:
$467 LTC investment
$10,856 LTC mined
$8,800 L7 value
-$1,708 operating costs
Our initial $20,000 investment is now worth $18,415.
Case #3- Investing $20,000 into BTC
In this case we'll simply buy $20,000 worth of BTC and see where that takes us. The math is pretty straightforward here; our end-state for this scenario is:
$9,776 BTC investment
So our initial $20,000 investment is now worth $9,776.
Case #4- Investing $20,000 into LTC
In this case we'll simply buy $20,000 worth of LTC and see where that takes us. The math is pretty straightforward here; our end-state for this scenario is:
$11,681 LTC investment
So our initial $20,000 investment is now worth $11,681.
Case #5- Investing $20,000 into ETH
In this case we'll simply buy $20,000 worth of ETH and see where that takes us. The math is pretty straightforward here; our end-state for this scenario is:
$9,581 ETH investment
So our initial $20,000 investment is now worth $9,581.
Quick hack to monitor that your L7 (or any miner) is working
I've seen a lot of posts lately, and even got a message or two, asking how I remotely monitor my L7. Since (as of 8/12/22) there isn't any after-market firmware available, nor would I suggest using any while your system is under warranty, some hacks and work-arounds are needed.
First, I use VNC Server to remote into my miners. This is a great and powerful tool and there is a free version available. I run it on a variety of systems; old Windows laptop, Raspberry Pi, and even
a Chromebook I bought and hacked into Linux.
That solves the remote login issue, but what about alerting you when it's down? There are scripts you can write that verify pings every so often, and that will tell you if it's alive. But what about whether it's hashing?
To solve that I look at it inversely, I want my miner to tell me it’s down, but if that’s not easily available, then I want it to tell me it’s working. So how can we do that? It’s not as hard as you
might think, in fact it’s a quick and easy hack. First, this will depend on what pool you mine and what the payout interval is. For example, if you use Litecoinpool you can set a payout threshold to
meet a time interval of roughly 4 hours. For Nicehash it just happens every 4-6 hours. Why is this important you might ask?
Now let's look at your wallet. Pretty much every wallet on earth allows you to receive a notification when you receive a payment. I have a wallet that I use for mining with an alert set for any incoming payment. So, knowing I have a payout threshold of 0.1 LTC on Litecoinpool, for one L7 I expect a payout roughly every 5 hours. If I don't receive a text alert every 5-6 hours from my wallet that a payment was received, I can log in with VNC and verify that I'm still hashing.
Is it perfect, no, but it works for me and was a quick and easy hack to make sure I don’t have to constantly login to monitor miners. I know there are better hacks, feel free to share, that’s what
this community is all about!
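For anyone who wants to script the same trick, here's a rough sketch of the logic (the names and thresholds are my own assumptions; tune them to your pool's payout cadence):

```python
import time

EXPECTED_INTERVAL_H = 5  # rough payout cadence for one L7 at a 0.1 LTC threshold
GRACE_H = 1              # slack before we panic

def payout_overdue(last_payout_ts, now=None):
    """True when no payout (our heartbeat) has arrived within the window."""
    now = time.time() if now is None else now
    return (now - last_payout_ts) > (EXPECTED_INTERVAL_H + GRACE_H) * 3600
```

Wire `last_payout_ts` to whatever your wallet's notification hook gives you; when this returns True, that's your cue to VNC in and check the hash rate.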
How a cheap knockoff duct fan almost took out my crypto farm
How is that for a headline? Certainly nothing you ever want to read or even think about. As you can imagine, I feel very blessed and lucky that the circuit tripped before anything worse could happen, but it's a very good lesson to learn. Mainly: always inspect your equipment, don't buy cheap unknown knockoffs, and use circuit protection for everything.
So what happened? I was gone for a few days on a work trip so I can’t say the exact time this happened, but I can ball park it since the duct fan killing the circuit shut down the miner as well. A
caveat to that is that the miner only shut down because it has programming to shut it off when it gets too hot, which is another learning point for me.
This happened on an L7 I have and I was using the duct fan as the primary cooling and manually set the fans on the L7 to be at 30% (roughly 3000RPM.) Once the duct fan controller burned out the only
thing cooling the L7 was its own fans, which I had set too low to actually cool it, hence it shutting down. Lesson here is that I had no need to manually dial the L7 fans at 30%, I could have left it
(and will from now on) at automatic and they simply would speed up if the duct fan ever failed. So what if I had removed my onboard fans like several people have mentioned? There’s more of a risk
that you could burn up your hash boards before the software catches them heating beyond threshold (90C I believe on the L7.)
I threw a little video together on my lesson learned and what to watch out for.
How to troubleshoot your L3+ using the kernel log.
We’ve all had miners fail (if you’re here that’s most likely why), it’s frustrating, maddening, but gosh darn-it, people like you. After you’re done staring in the mirror, it’s time to start figuring
out what to do. The number one thing I tell everyone I’ve worked with is to save that kernel log! I mean don’t turn the miner off until you’ve gone into the settings and copied and pasted that log
into a file you save. The answer to your question may be in there.
Why save the current log? Well, for one thing that’s the “black box” of what happened. All communications between the boards and the mining pool are there and you want to know what the last thing was
that happened before things went awry. Most miners (at least ones I’ve worked with) don’t save the log if you power down and reboot, it will instead overwrite with a new one. There are ways to
recover it however, but unless you’re a more advanced user that knows how to SSH in, that’s not something that is easy to walk through.
There are many things that the kernel log can tell us that will point directly to the issue at hand, below are just some of the more common ones (and some uncommon) that I’ve come across:
Chain X ASIC 0 !!!
0 ASICS isn’t always a death sentence.
Another very frustrating error with very little data. Luckily, for such a vague error this is often a rather simple fix. It essentially tells you that when the control board polls the hash board (via the I2C bus) it gets either no response or an improper response from the hash board's PIC after it initializes. Sometimes it will even come back with a number like Chain 2 ASIC 23 !!!, which points to ASIC 24 as the specific issue; an ASIC 0, however, points to a few things we can try to fix.
CHECK VOLTAGES AT THE 10V BUCK CONVERTER AND 14.2V BOOST CIRCUIT.
Sometimes, more often than it should, the boost circuit on the hash board fails and subsequently ASICs show as missing or 'xxxxxx'. Check out my walkthrough manual and scroll to the 14.2V Boost Circuit for more info. Furthermore, ensure the output of the 10V buck converter matches the voltage you've set in the firmware for that chain (i.e. common voltages like 9.5V, 9.8V, or the stock voltage).
The L3 has 12 voltage domains, each controlled by a single voltage regulator. Most recently Bitmain used an LN1134 but has used an SPX5205 and SGM2202 in the past. In the past these domains have failed when the 14.2V circuit failed, so it's a good idea to check the voltage at each domain. This can be done by measuring from pin 2 (the middle pin on the LDO) to pin 1 (input, ~2.4V) and to pin 5 (output, ~1.8V.) Check this at each LDO (i.e. U75, U76, U77…) Check out my walkthrough manual and scroll to power domains for more info.
From time to time intermittent problems like this can be solved by shutting the unit down for 30 seconds and then rebooting. This isn’t a long term fix but may get your unit back up and running for
the time being.
Go into your advanced settings for the problem hash board and try lowering the frequency and upping the voltage to at least 9.8V. Tuning the hash boards to run on minimal speed and power can have the
board operating at the edge of its ability to function. Resetting the PIC to a more normal operating condition may solve your problem. Likewise operating at too high a frequency and power can
potentially shorten the life of components or operate on the edge of functionality.
Sometimes reloading the firmware, especially with one that allows autotune, can help isolate or even fix the problem if it's with the PIC.
An intermittent connection can change with environmental conditions. Heat and cold can flex cold solder joints and ultimately lead to failures. I've found that reheating and reflowing the joints on the temp sensor, buck converter, and PIC have resolved problems I've had in the past with missing components.
Reflowing solves the problem (sometimes.)
Fan Errors
There are a couple of errors that can be associated with fans, and they're pretty straightforward to troubleshoot. The first is below; some older versions of the L3 firmware had this error, but most newer and after-market versions don't. This points to a failing or failed fan.
Fatal Error: Some Fan Lost or Fan Speed Low!
However as the fans do start to fail, below is the message you may get. For reference fan 1 is the fan plugged into the connector closest to the front of the unit.
Fan Err! Disable PIC! Fan1 speed is too low 390 pwm 44
Fan Err! Disable PIC! Fan1 speed is too low 0 pwm 100
Another thing that points to fan errors is no hashing, as the firmware shuts down the hash board if there's a fan failure to prevent a thermal runaway condition. Basically, change your fan; eBay is a gold mine for these!
Get Temp Data Failed
This is pretty straightforward and almost always points to a bad TMP451 on your hash board. This error tells you exactly which chain has the failure, so you can first try reflowing the solder on the TMP451 (the quality varies) or replacing the chip. I have a link to replacement parts here. The caveat is that this doesn't always affect hashing, and some firmware versions will regularly report this error due to a firmware glitch (per Bitmain.)
Get [1]Temp Data Failed!
Network Errors
Net Err! Frustrating and generally means you’re not making $. Most of the time you may see an error like below:
Net Err! lastest job more than 2 mins! waiting …
This is most likely due to one of three common problems.
1. You have a poor network connection
2. Your network cables or router have failed
3. Your stratum address is entered incorrectly
Number of “x” = 4 of Chain 0 Has Exceeded Threshold (on any Chain…)
This can point to a number of issues. Generally when you see this error in the kernel log that is the point at which the hash board shuts down or, if you have a hash rate watchdog enabled, it will
reboot. This is generally due to boards overheating or running too “hot” of a frequency. These issues can generally be remedied by lowering the frequency you’re operating the hash board at.
Scanreg Error
The infamous scanreg/crc5 error can be quite frustrating, but know that the secret to this generally lies in the ASIC chips themselves. The problem generally starts by seeing a series of the
following in your kernel log:
bitmain_scanreg,crc5 error,should be 00, but check as 01…..
This is essentially pointing to an ASIC failing its self test. Sadly I haven’t seen a way in the kernel log to point to a specific ASIC, but following my post on CRC5 errors will help you track it
down using your multimeter.
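Since the whole point is to save and read the kernel log, a little triage script can do the first pass for you. A sketch (the pattern strings follow the errors above; the cause mapping is my summary, not Bitmain documentation):

```python
SIGNATURES = [
    ("ASIC 0 !!!", "hash board not answering on the I2C bus: check the 10V buck and 14.2V boost"),
    ("speed is too low", "failing fan: replace it"),
    ("Temp Data Failed", "bad TMP451 temp sensor: reflow or replace"),
    ("Net Err", "network: check connection, cabling, and stratum address"),
    ("Exceeded Threshold", "overheating or too-hot frequency: lower the clock"),
    ("crc5 error", "an ASIC failing its self test: track it down with a multimeter"),
]

def triage(kernel_log):
    """Return the likely causes suggested by a saved kernel log."""
    hits = []
    for line in kernel_log.splitlines():
        for pattern, cause in SIGNATURES:
            if pattern in line and cause not in hits:
                hits.append(cause)
    return hits
```

Paste in the log you saved before rebooting and you get a short list of where to start probing.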
Installing 120V or 240V 20A branch circuits for your crypto miners
In my series of "I know what I want to see, but what do you want to see?" videos, I had someone ask about the installation of basic 120V or 240V circuits. Well, you got it. I've done electrical work for years off and on; that, along with some University training, should hopefully lead to some useful insight in the below video.
Of note, and a disclaimer, I am not a licensed Electrician, so please consult an Electrician, NEC code, or your local inspector before energizing any circuits. If you’re doing this in your home
you’ll most likely need to pull a permit for it anyway, but many areas allow homeowners to do their own work as long as it meets code.
Cooling and Quieting down an L7/S19
Ever have the feeling that you’re about to have an aircraft take off from your house? If so, you’re in good company (and making money) but the FAA hasn’t found you yet and last thing you need is the
HOA bitching about your plane not fitting in your garage (and you left your garbage cans out.)
So I had been running a Bitmain L7 aircraft, or miner, for about a week and couldn’t get the fans under ~5300RPM, which btw creates that nice high pitched whine you can hear throughout the house. My
setup was an open inlet, an 8″ dual fan shroud (https://ebay.us/13zn51) on the exhaust to an 8″ duct (amzn.to/3DgKYy4) , then ducted pretty much straight out a window. I have a 6″ duct in the window
as well to keep fresh air vented in. With this setup I saw the following on the L7:
Output Fan RPM – 5280 – 5500 RPM
Output Temps (highest board 74C/72C)
81 dB @ 1 foot (measured)
Good, not great, but good. How can we get these temps down I thought and if I get them down enough will the fans slow down?
To increase airflow in the exhaust duct I installed a 420 CFM 8″ duct fan (amzn.to/3wF6XNR) and that brought me to the following on the L7:
Output Fan RPM – 5180 – 5280 RPM
Output Temps (highest board 73C/71C)
80 dB @ 1 foot (measured)
Duct fan power usage – 55W (measured)
Regardless of the dB level, the fans were still running high enough that the fan whine just resonated through the house, there’s got to be a better way, STOP THE MADNESS! Well, if some airflow
brought it down a little, a lot of airflow may bring it down a lot?
If you read my previous bloguverse post on What’s my fans CFM and how do I measure it then you know that a 420 CFM vent fan might not be enough to create negative pressure with the Bitmain L7 fans
running that high. A big reason for an exhaust duct fan is to create the negative pressure that helps cool and vent the unit. A telltale sign your duct fan is working in this fashion is the miner fans themselves slowing down while temps stay the same or drop. Another way to test this is to cut a small opening in the duct between the miner exhaust and the duct fan. If air is blowing out, then your duct fan isn't creating negative pressure, it's actually adding resistance. If air is drawn into the opening, then your duct fan is functioning as designed (fancy term for OK.)
So next step, I bought an adjustable 735 CFM 8″ duct fan (https://amzn.to/3IVnXC5) and cranked it up around 80%/600 CFM (measured) and got an interesting result. Slowly the fans on the L7 started
slowing down, all the way down to about 3200 RPM, which is where they’re sitting now after a few days. They bounce between 3200-3400 RPM, but most important the fan whine is gone and the duct fan
isn’t any louder than the lower RPM Bitmain fans. Another quick point, the first duct fan I used was all metal, and the second was a polymer. I did several temp measurements at the exhaust, just
before the duct fan (32C/90F @ 1 foot), and found that they were low enough to use a polymer fan (rated at 140F) without the fear of it melting or having any deformation.
Here’s the data from the L7 and it’s hashing around 9300 MH/s:
Output Fan RPM – 3180 – 3380 RPM
Output Temps (highest board 74C/72C)
73 dB @ 1 foot (measured)
Duct fan power usage – 136W (measured)
So a significant drop in dB, but most importantly once the fans got under ~4000 RPM the fan whine was all gone. If you know much about the dB scale, or even if you don’t, it’s logarithmic. What that
means is that if something, like having a conversation with your cat Fluffy (if you’re into that), is measured at 60 dB, then turning on a vacuum will scare the cat. Why? Maybe old Fluffs hates
vacuums, but maybe it’s the fact that a vacuum would be considered a factor of 10 times louder, scaring your feline friend. Now what about carrying your little Fluffer-nutter outside near a busy
intersection? Well you’ll get the crap scratched out of you, probably because traffic is roughly 20 dB louder than your friendly conversation with Fluff-daddy, which equates to a noise 100 times
greater than the secrets only you and your cat share.
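If you want the cat-free version of that math, the dB scale is just a logarithm of the power ratio; a one-liner (my own helper, nothing miner-specific):

```python
def power_ratio(delta_db):
    """Sound power ratio implied by a dB difference (+10 dB = 10x)."""
    return 10 ** (delta_db / 10)

# +10 dB is 10x the power, +20 dB is 100x; the 81 -> 73 dB drop in
# this post cut radiated sound power by a factor of about 6.3
```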
So when I go from 81 dB to start, down to 73 dB, I’ve cut my basement hobby down from Grand Central Station to a white noise coming from below, maybe even fooling my wife into thinking I’m vacuuming!
Trust me, mama will never get mad at you if she thinks you’re cleaning.
In summary, and what’s most important other than making $$$, happy wife happy life!
No cats were harmed in the making of this bloguverse post.
What’s my fans CFM and how do I measure it?
Part of managing your crypto miner is finding the perfect balance between performance, power usage, cooling, and noise. It's not often easy, or even fair, how the balance works out. Oftentimes I find myself sacrificing hashing power in favor of cooling and noise (happy wife happy life…)
So let’s talk about that, cooling and noise. Is there a way to pull this off and still keep a reasonably high level of hashing? Short answer, of course yes, long answer is still to come.
I did a little research into the actual air movement with respect to fan speed, and threw in some sound data (db @ 3″) for fun. I didn’t include temperature data because all hash boards and all
systems running at all different voltage levels would just add too much confusion into the data (which may already be confusing enough!)
The test data was from a stock L3+, 4 hash boards running at 384 MHz (504 MH/s), using stock 6000 RPM fans, and I ran the fans from 20% speed to 100% speed (set in the firmware.) I took each measurement 3 different times after resetting the speed in the firmware, which gave slightly different results each time. One dataset was significantly different; I then realized my measuring point was off for that set, so I tossed it. The CFM was measured at one corner of the intake and exhaust where I saw the greatest volume. I measured the sound (dB) at 3″ since I have other miners running and anything much further out than that modified the experiment too much. 3″ lets us track the trend of sound with respect to fan speed, but won't give you a relative dB level to explain to your family why your basement/garage/shed/man cave is so loud.
Below are graphical reports of the data I collected. What I found most interesting was the actual CFM that I measured the fans at. As you can see the CFM drops off as the RPM drops off.
This is a basic and straightforward view, but to understand the relationship of CFM to RPM I normalized it against 100% fan speed. Basically this tells us that the relationship is directly proportional, i.e. speed up the fan 10% and you get 10% more CFM.
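That directly proportional relationship is the first fan affinity law, and it makes a handy predictor (the helper name and sample figures are mine, not measured values from this test):

```python
def predicted_cfm(measured_cfm, measured_rpm, target_rpm):
    """First fan affinity law: airflow scales linearly with fan speed."""
    return measured_cfm * (target_rpm / measured_rpm)

# if a fan moves X CFM at 6000 RPM, expect about X/2 at 3000 RPM
```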
Now what about noise? While we all care about the last two graphs, mama and the family really only care about one thing, noise (two if you count money but I didn’t collect data on happiness versus
profitability yet…)
What the chart shows is that you have a significant drop in noise going from 6000 RPM (100%) to around ~3700 RPM (50%.) First off, I know 3700 isn’t 50% of 6000, but that’s up to Bitmain and how they
wrote their firmware. But what it does tell me is that if I can tune my boards so that I only have to run around 3700 RPM then I’ve turned my indoor gas lawnmower into pleasant office noise (and
minimized my chances at even more hearing loss.)
Also, one bonus chart, ever wonder how to relate fan speed % to RPM in the firmware, here’s what I came up with. While actual RPM did vary slightly, it didn’t seem to vary more than 1% each time I
set at each specific fan speed percentage.
The big thing to remember as well, the speeds and CFM (not the db necessarily) are based off the fan manufacturer so these values are good on any miner that uses this stock 6000 RPM fan.
Hope y’all find this useful!
Don’t skimp on the solder when reflowing an ASIC…
What is that I see above? That's ASIC 28 on an L3+ hash board. You probably can't tell what's going on here, but I'll attempt to explain why the image above is bad, and I promise I have a better one.
I was working on a hash board that showed the infamous "0 asics found" and eventually, through blood, sweat, anger, hunger, and tears, found that ASIC 28 was actually bad and wound up affecting the entire hash board. Sometimes the ASIC tester points to the exact cause, but most of the time it's just you, a multimeter, and a worthless $200+ hash board test jig.
Let’s fast forward to me soldering on my first DFN (dual-flat no-leads) package. At first they look intimidating, but once you get the hang of it they’re actually quite simple to put on. The only
thing you have to worry about (which bit me in the end, hence why I’m writing this) is that if you don’t have enough solder on the power and ground pads, you won’t get a good enough connection for
the ASIC to function.
I followed a few videos online (from Antminer Repair) to learn the basics, but since I didn’t have the fancy solder stencil, I was forced to do my favorite thing, improvise.
I started the painstaking process of removing the old ASIC, which can be a problem itself. Bitmain doesn’t use a low temp solder so you really have to heat things up for a while before the ASIC will
give up its death grip. Unfortunately I did a little damage to the PCB (printed circuit board) when removing it, fortunately it was a NC (no connect) pin. You’ll also want to remove the heat sinks
from the ASICs around the one you’re replacing so you can get some room to work. About 30 seconds from a heat gun is enough to pop these off easily.
Luckily Pin 15 (torn pad) is a NC (no connect) and on the side you see some copper showing from me prying the old ASIC off.
After following the advice from Antminer Repair on Youtube I tinned both the pads on the PCB and on the ASIC itself. I then placed the part on and heated it until it self aligned and slowly held it
in place until the solder hardened. Sounds way too simple, but easy day, right?
After cooling and cleaning I went and plugged the hash board in, nothing, “0 asics found”, I was defeated. I broke out my multimeter and started measuring all the signals (RI, RST, CLK, BO, and CO.)
I even measured out the resistance between pins and pads to make sure I hadn’t damaged anything. I was at a loss. Finally it dawned on me, maybe there’s poor connections from the ASIC to the PCB. I
don’t have x-ray vision, so I got the next best thing, an actual x-ray.
ASIC 28 (center) with poor connection to power and ground due to me being skimpy with the solder.
As you can see, or maybe not, look at the ASIC above and below, there is a large solid mass where the power and ground planes are. Now look at the one in the middle, not so much. My aha moment was
here, without proper power and ground this puppy wasn’t going anywhere.
I reflowed the ASIC and removed it, added a bit more solder, and voilà, 72 ASICs found, my dream come true.
Don’t skimp, don’t be cheap, and practice practice practice (or do what I’m going to do in the future and pay a professional.)
How many miners, how big of a miner, can I fit on a circuit?
More and more I’ve seen questions from folks about how many, or how big, of a miner can I fit on a circuit. This is a question that has many unknown variables that I plan on diving into below.
Disclaimer – Please keep in mind this is my opinion and not an actual consultation for your mining business, so before you do any wiring or make any changes to your electrical system please consult a
licensed electrician.
Now on to the good stuff. In order to answer the question of how many/how much, we need to know a little more information: the basics of your panel, the basics of your circuits, the basics of the miner, and anything else you plan to operate.
For starters, each location in question needs a load calculation done to ensure that you aren't going to draw more than 80% of the maximum panel rating. As an example, if you have a dedicated 100A panel you can't draw more than 80A continuous at any time. Since the mining world is in watts, let's use the power formula (P = I × V) to break that down.
100A Panel @ 80% rule = 80A
80A x 240V = 19,200W
So the key thing to keep in mind before planning out circuits is that you can’t have a continuous load more than 19,200W on this panel.
Now that we know the maximum wattage of our panel, we can use a little math to plan out what our circuits need to be. I’ll make the assumption that whatever miner we are using operates on 240V since
that’s the industry standard at the power level they operate on.
There are 2 main amperage ratings for 240V breakers that are used in the mining world (yes I know there are more but these are the most common), 20A and 30A. There are several reasons for choosing
one over the other, ranging from length of wire run, number of units per circuit, cost of wiring, and available breaker space. The most important thing to remember about each circuit is that they
also have a maximum wattage rating:
20A Circuit @ 80% rule = 16A
16A x 240V = 3840W
30A Circuit @ 80% rule = 24A
24A x 240V = 5760W
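Those breaker and panel numbers all come from the same 80% derating, which is easy to wrap in a couple of helpers (the names are mine; integer math keeps the figures exact):

```python
def max_continuous_watts(breaker_amps, volts=240):
    """NEC 80% rule: continuous-load limit for a breaker or panel, in watts."""
    return breaker_amps * volts * 4 // 5

def miners_per_circuit(breaker_amps, miner_watts, volts=240):
    """Whole miners that fit on one circuit under the 80% rule."""
    return max_continuous_watts(breaker_amps, volts) // miner_watts

# 100 A panel -> 19,200 W budget; a 20 A / 240 V branch -> 3,840 W,
# enough for one ~3,250 W S19/L7 or four ~900 W L3++ units
```

Keep in mind the whole-panel budget still caps the total across all branch circuits, as the scenarios that follow show.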
So we know the maximum wattage our panel will hold, and we know the maximum wattage different types of circuits hold, only thing we are missing is what type of miner we’re running and what its power
requirements are.
Since this is February 2022 I’ll choose two scenarios, the first is using a newly released miner, the second is using a couple generations old miner. I’m partial to Bitmain products so I’ll look at
the S19 or L7 for one scenario (~3250W) and the L3++ for the other (~900W, don’t hate on my S9 people!)
We know that our maximum panel wattage is 19,200W, so how many of each miner could we theoretically operate? (I say theoretically since we have to split this up into several circuits and the math may not work out perfectly.)
S19/L7 = 3250W
19,200W / 3250W = 5.9
5 S19/L7 = 16,250W
So that leaves us with 2950W available in the panel for other electronics, fans, network equipment, etc.
L3++ = 900W
19,200W / 900W = 21.33
21 L3++ = 18,900W
So this leaves us with only 300W available in the panel for other electronics, fans, network equipment, etc. Of course this is only relevant if you plan on running everything for your operation off
this one panel.
Now that we know what our options are it’s more simple at this point, with a little math for the L3++ scenario.
For the S19/L7 scenario it’s clear that we need 5 – 20A circuits, one circuit for each unit. This allows us to dedicate one circuit per unit and keeps us under the 80% rule for the full panel load
(we are only using 16,250W which is only 68%.) Unfortunately you can’t fit two of them per 30A circuit like some other units (you’d need to be closer to 2800W to do this) but we have plenty of space
in the 100A panel for 5 20A breakers. Keep in mind that 20A breakers will use 12ga wiring whereas stepping up to a 30A breaker requires 10ga wiring.
For the L3++ scenario we use a little math. We can put 4 units per 20A circuit (3600W) or 6 units per 30A circuit (5400W). To maximize the number of units, and keep it simple, we can also go with 5 –
20A breakers, 4 units for each circuit. This keeps us under the 80% rule for the full panel load (we are only using 18,000W which is only 75%.) Once again we will use 12ga wiring for this.
This was a quick down and dirty calculation and please know that there is more in depth planning that would go into an actual circuit design (like run lengths, ventilation, other equipment draw, main
panel draw, etc.)
No chips, no temp, no hash, no problem!
Hey all,
How many times have you seen a post by someone that has a hash board that doesn’t recognize all the ASICs on board, has no temperature reading, and isn’t hashing or the hash rate starts then rolls
down to zero?
Well, I’ve seen it a lot, and now I’ve just experienced the fun myself. I had an L3+ hash board I bought on eBay listed as “FOR PARTS ONLY,” so I took that as a challenge. When I first plugged it in it actually
worked properly, so much so that I installed it into one of my L3+ units thinking I had just hit the gold mine. Well, as soon as I put it in it started having issues: all ASICs found but no hashing, 56
ASICs found the next time, then 0, and so on.
I followed a few simple steps to work through the problem and voilà, within 15 minutes I narrowed the problem down to one of the LDOs, which was failing. Using a heat gun I removed the part, replaced
it with a similar part (LN1134), added a little solder paste and a little heat, and that bad boy mounted right down. I find that with these smaller parts it’s easier to use solder paste: smear a little onto the
pads, and the heat gun will reflow it and pull the part onto the pads fairly well.
I put a quick video together to show the process of finding the failure, as always love the feedback and any experiences similar or not.
Maximize your circuits and minimize costs
To continue the journey into setting up your crypto miners, specifically the L3+, you should start considering a long-term electrical plan. What I mean by this is how you can optimize your existing
electrical circuits in your home, office, shed, or wherever, to get the best efficiency (W/MH) and, overall, the most MH/s(AVG) per circuit.
Update 12/27/21: I’ve had many folks ask how to measure their power draw. One solution that works very well is to install a Sense Energy Monitor on either your main electrical panel or a sub-panel
that you have dedicated to your miners. This will give you real time feedback on the power (watts) used by your devices and make it easier come tax time to properly divide up your electrical bill and
have the proof of the percentage you dedicate to your mining operation.
I’ll assume that all units will operate off 240V for this, as it’s generally considered the most efficient: you pass less current through the wiring than you would on 120V, which minimizes cost and
power transmission loss.
After some testing of various operating power/efficiency levels (Overclocking on the L3+, is the juice worth the squeeze?) of the L3+, I’ve got a good data set that gives me an efficient range to
operate the L3+ within. So what now, data, great, how do I put it to use?
I ran the tests at frequencies from 384MHz (stock) to 500MHz, and at each frequency I ran 9.5VDC, 9.8VDC, and 9.92VDC. The most ideal setting (with the best W/MH) for overall hashing rate was 469MHz,
giving us ~608MH/s(AVG) @ 1.54W/MH (935W total.) The most ideal setting for overall efficiency (W/MH) was 384MHz, giving us ~504MH/s(AVG) @ 1.4W/MH (695W total.) A midrange that balances the two was
450MHz, giving us ~576MH/s(AVG) @ 1.52W/MH (873W total.) We also have to add in the wattage for the control board and fans. I took some measurements with an ammeter and found that the control board
was only drawing about 10W and the fans, albeit variable, will generally draw no more than their max rating which would be ~30W each.
Note: These are all numbers that have not had any type of auto-tuning done at the individual chip level, so your actual numbers can vary depending on that process if you choose to do it. These are just
baseline numbers to go off of.
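As a sanity check on those efficiency figures, W/MH is just board watts divided by average hash rate. A quick sketch using the numbers above:

```python
# (board watts, MH/s average) for each frequency tested above
settings = {
    "384MHz": (695, 504),
    "450MHz": (873, 576),
    "469MHz": (935, 608),
}

for freq, (watts, mhs) in settings.items():
    print(f"{freq}: {watts / mhs:.2f} W/MH")
# 384MHz: 1.38 W/MH, 450MHz: 1.52 W/MH, 469MHz: 1.54 W/MH
```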
For those that aren’t familiar with residential or commercial wiring, a quick note on how much to load the circuits. The National Electric Code (NEC) essentially requires that each circuit have the
ability to carry 125% of the continuous load. So if we have a 20A circuit, that is our theoretical 125%, which puts the continuous load at 16A (16A x 125% = 20A.) Head math shows that 16A is 80% of
20A, hence the 80% rule. After we determine the size of the circuit (i.e. 20A) we then reference the NEC code to find the appropriate wiring gauge for the circuit. This is code for one very good
reason, you don’t want to overload and heat a smaller gauge wire too much or you’ll burn it up, and burn down your structure. I’m sure many folks have seen this on the DC side with wiring from power
supplies to either ASIC miners or GPUs. I’ve chosen to use 20A for most of my setups, mainly due to the cost of the wire (12 gauge wire is significantly cheaper than 10 gauge), but also the efficiency
calculations you’ll see later on in this post.
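The 80% rule reduces to breaker amps times volts times 0.8. Sketched here with integer math for the common circuit sizes used in this post:

```python
def usable_watts(breaker_amps, volts=240):
    # NEC continuous-load rule of thumb: load a circuit to at most 80% of its rating
    return breaker_amps * volts * 80 // 100

print(usable_watts(20))  # 3840 -> the 20A/240V budget used in this post
print(usable_watts(30))  # 5760 -> the 30A/240V budget used in this post
```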
So let’s get into the meat and potatoes of what this is all about. I’ve listed out the most common circuits you’ll find and created scenarios based off those.
20A/240V – Given the 80% rule we have 3840W available to support our L3+ units.
L3+ @ 469MHz = 935W + 10W (control board) + 60W (fans) = 1005W total.
3840W / 1005W = 3.82, so basically we can only run 3 L3+ units with plenty of room to spare and we are getting 1,824MH/s(AVG) out of the 20A circuit.
As a side note, we can toss one more L3+ in there at the most efficient setting (see below for wattage calculation) and that puts us at a total of 3780W and 2,328MH/s(AVG).
L3+ @ 384MHz = 695W + 10W (control board) + 60W (fans) = 765W total.
3840W / 765W = 5.02, so now we’re up to 5 units and we’re getting 2,520MH/s(AVG) out of the 20A circuit.
L3+ @ 450MHz = 873W + 10W (control board) + 60W (fans) = 943W total.
3840W / 943W = 4.07, so now we’re at 4 units and we’re getting 2,304MH/s(AVG) out of the 20A circuit.
30A/240V – Given the 80% rule we have 5760W available to support our L3+ units.
L3+ @ 469MHz = 935W + 10W (control board) + 60W (fans) = 1005W total.
5760W / 1005W = 5.73, so basically we can only run 5 L3+ units with plenty of room to spare and we are getting 3,040MH/s(AVG) out of the 30A circuit.
As a side note, we can toss one more L3+ in there at the most efficient setting and that puts us just over the 30A circuit at 5790W. Promise me you’ll unplug one intake fan (-30W) and that would give
us 3,544MH/s(AVG).
L3+ @ 384MHz = 695W + 10W (control board) + 60W (fans) = 765W total.
5760W / 765W = 7.52, so now we’re up to 7 units and we’re getting 3,528MH/s(AVG) out of the 30A circuit.
L3+ @ 450MHz = 873W + 10W (control board) + 60W (fans) = 943W total.
5760W / 943W = 6.11, so now we’re at 6 units and we’re getting 3,456MH/s(AVG) out of the 30A circuit.
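The per-circuit packing above can be reproduced in a few lines, using the per-unit draw assumed in this post (board watts plus 10W for the control board plus 60W for fans):

```python
# (board watts, MH/s average) per frequency, from the test data above
profiles = {"384MHz": (695, 504), "450MHz": (873, 576), "469MHz": (935, 608)}

for circuit_watts in (3840, 5760):  # 20A and 30A budgets after the 80% rule
    for freq, (board_w, mhs) in profiles.items():
        unit_w = board_w + 10 + 60  # add control board and fans
        units = circuit_watts // unit_w
        print(f"{circuit_watts}W @ {freq}: {units} units, {units * mhs} MH/s(AVG)")
```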
50A/240V – Did you disconnect your AC or hot tub for these miners or something?
You probably are spending more money on wiring (code says you’ll need 6 gauge wiring) than you can make on this circuit in a week. With the wiring and conduit, you’ll spend close to $5 per foot. In
other words, stick with 20A (12 gauge wire) or 30A (10 gauge wire), the wiring is available at your local Lowes or Home Depot and comes in Romex so it’s an easier install without needing conduit.
That’s all I have on this.
In summary, efficiency is king. Running our units at 384MHz and 9.5V yields more than an 8% gain in MH/s(AVG) in a 20A circuit and about a 16% gain over the 469MHz setting in a 30A circuit.
Individual results may vary, take it for what it’s worth, but if you have the units, keep them running efficiently and you’ll get the most bang for your buck!
New Frontiers in Physical Science Research Vol. 3
We present an extension of General Relativity governed by a scalar Lagrangian density, a function of all the independent invariant scalars one is able to build from powers of the Ricci tensor. The
new terms arising in the generalized Einstein field equations may, from the viewpoint of usual General Relativity, be interpreted as dark matter and dark energy contributions. A metricity condition
fulfilled by a new tensor, different from the usual metric tensor, is also obtained. Moreover, it is shown that the Schwarzschild-De Sitter, Robertson-Walker-De Sitter and Kerr-De Sitter metrics are exact
solutions of the new field equations. Remarkably, the form of the equation of the geodesic trajectories of particle motions across space-time remains the same as in Einstein's General Relativity, except that
the cosmological constant \(\Lambda\) is no longer a constant but becomes a function of the space-time co-ordinates.
Quantum Computer Systems
Lecture Series
Organized by Zhiding Liang (Notre Dame) and Hanrui Wang (MIT)
Quantum Knowledge Worth Spreading
A quantum computer system lecture series spanning introductory to advanced research topics, aiming to build basic quantum knowledge and introduce the latest research to audiences who lack a quantum
computing background but are interested in the field.
Lecture 65 by Walid Saad (Virginia Tech)
Scaling Quantum Communication Networks
Lecture 64 by Tzu-Hsuan Huang (National Tsing Hua University)
Decoder for quantum error correction codes: from the perspective of classical error correction codes
Lecture 63 by Ivana Nikoloska (Eindhoven University of Technology)
Engineer’s Guide to Machine Learning with Quantum Computers
Lecture 62 by Yuxuan Du (Nanyang Technological University)
Multimodal Learning for Cross-Platform Verification in Early-Stage Quantum Computing
Lecture 61 by Kanad Basu (University of Texas at Dallas)
Efficient Quantum Verification Approaches
Lecture 60 by Chunhao Wang (Pennsylvania State University)
Quantum algorithms for simulating the dynamics of open quantum systems
Lecture 59 by Shaolun Ruan (Singapore Management University)
Achieve Explainable Quantum Computing Using Visualization
Lecture 58 by Prof. Mark Wilde (Cornell)
Variational Quantum Semi-definite Programming
Lecture 57 by Daniel Puzzuoli (IBM)
Calibrating Numerically Optimized Pulses
Lecture 56 by Korbinian Kottmann (Xanadu)
ODEgen: Analytic Gradients of Quantum Pulse Programs
Lecture 55 by Kaitlin Smith (Northwestern)
Scaling Quantum Computing Systems via Codesign
Lecture 54 by Prof. Yufei Ding (UCSD)
Quantum Computing Systems: Challenges and Opportunities
Lecture 53 by Dr. Weitang Li (Tencent Quantum Lab)
Quantum Computational Chemistry: Variational Quantum Eigensolver and the Design of Ansatz
Lecture 52 by Prof. Xiantao Li (Penn State)
Open Quantum Systems in Quantum Computing
Lecture 51 by Prof. Prabhat Mishra (UFlorida)
Design Automation for Quantum Computing
Lecture 50 by Mitch Thornton (Southern Methodist University)
Quantum Oracle Synthesis with an Application to QRNG
Lecture 49 by Archana Kamal (University of Massachusetts Lowell)
Quantum Reservoir Engineering for Fast and Scalable Entanglement Stabilization
Lecture 48 by Evan McKinney (Pittsburgh)
Quantum Circuit Decomposition and Routing Collaborative Design
Lecture 47 by Daniel Silver (Northeastern)
Quantum Machine Learning on Current Quantum Computers
Lecture 46 by Dr. Michael Goerz (U.S. Army Research Lab)
Numerical Methods of Optimal Quantum Control
Lecture 45 by Tianyi Hao (UW Madison)
Enabling High Performance Debugging for Variational Quantum Algorithms using Compressed Sensing
Lecture 44 by Ricky Young (QBraid) and Mert Esencan (Icosa Computing)
Quantum-Inspired Optimization for Real-World Trading by Icosa powered by qBraid
Lecture 43 by Prof. Zheng (Eddy) Zhang (Rutgers)
A Structured Method for Compilation of QAOA Circuits in Quantum Computing
Lecture 42 by Prof. Zichang He (JPMorgan)
Align or Not Align? Design Quantum Approximate Operator Ansatz (QAOA) with Applications in Constrained Optimization
Lecture 41 by Prof. Giulio Chiribella (University of Hong Kong)
The Nonequilibrium Cost of Accuracy
Lecture 40 by Dr. Ruslan Shaydulin (JPMorgan)
Parameter Setting in Quantum Approximate Optimization of Weighted Problems
Lecture 39 by Thomas Alexander (IBM)
Control Systems & Systems Software @ IBM Quantum
Lecture 38 by Dr. Daniel Egger (IBM)
Pulse-based Variational Quantum Eigensolver and Pulse-Efficient Transpilation
Lecture 37 by Dr. Ji Liu (Argonne National Laboratory)
Elevating Quantum Compiler Performance through Enhanced Awareness in the Compilation Stages
Lecture 36 by Dr. Samuel Yen-Chi Chen (Wells Fargo)
Hybrid Quantum-Classical Machine Learning with Applications
Lecture 35 by Prof. Zhu Han (Houston)
Hybrid Quantum-Classic Computing for Future Network Optimization
Lecture 34 by Zhirui Hu (George Mason)
Optimize Quantum Learning on Near-Term Noisy Quantum Computers
Lecture 33 by Lia Yeh (Oxford)
Quantum Graphical Calculi: Tutorial and Applications
Lecture 32 by Prof. Yuan Feng (University of Technology Sydney)
Hoare logic for verification of quantum programs
Lecture 31 by Dr. Marco Pistoia (JPMorgan)
Quantum Computing and Quantum Communication in the Financial World
Lecture 30 by Dr. Thinh Dinh (Vietnam National University)
Efficient Hamiltonian Reduction for Scalable Quantum Computing on Clique Cover/Graph Coloring Problems in SatCom
Lecture 29 by Prof. He Li (Southeast University (China))
Rethinking Most-significant Digit-first Arithmetic for Quantum Computing in NISQ Era
Lecture 28 by Dr. Naoki Kanazawa (IBM Quantum)
Pulse Control for Superconducting Quantum Computers
Lecture 27 by Minzhao Liu (UChicago)
Understanding Quantum Supremacy Conditions for Gaussian Boson Sampling with High Performance Computing
Lecture 26 by Charles Yuan (MIT)
Abstractions Are Bridges Toward Quantum Programming
Lecture 25 by Jiyuan Wang (UCLA)
QDiff: Differential Testing for Quantum Software Stacks
Lecture 24 by Runzhou Tao (Columbia)
Automatic Formal Verification of the Qiskit Compiler
Lecture 23 by Yuxiang Peng (University of Maryland)
Software Tools for Analog Quantum Computing
Lecture 22 by Yasuo Oda (JHU)
Noise Modeling of the IBM Quantum Experience
Lecture 21 by Prof. Jun Qi (Fudan)
Quantum Machine Learning: Theoretical Foundations and Applications on NISQ Devices
Lecture 20 by Prof. Mohsen Heidari (IU, Bloomington)
Learning and Training in Quantum Environments
Lecture 19 by Prof. Nai-Hui Chia (Rice)
Classical Verification of Quantum Depth
Lecture 18 by Prof. Gushu Li (UPenn)
Enabling Deeper Quantum Compiler Optimization at High Level
Lecture 17 by Prof. Tirthak Patel (Rice)
Developing Robust System Software Support for Quantum Computers
Lecture 16 by Wei Tang (Princeton)
Distributed Quantum Computing
Lecture 15 by Zeyuan Zhou (JHU)
Quantum Crosstalk Robust Quantum Control
Lecture 14 by Bochen Tan (UCLA)
Compilation for Near-Term Quantum Computing: Gap Analysis and Optimal Solution
Lecture 13 by Prof. Guan Qiang (Kent State)
Enabling robust quantum computer system by understanding errors from NISQ machines
Lecture 12 by Prof. Jakub Szefer (Yale)
Quantum Computer Hardware Cybersecurity
Lecture 11 by Prof. Prineha Narang (Harvard)
Building Blocks of Scalable Quantum Information Science
Lecture 10 by Yilian Liu (Cornell)
Solving Nonlinear Partial Differential Equations using Variational Quantum Algorithms on Noisy Quantum Computers
Lecture 9 by Prof. Chen Qian (UCSC)
Protocol Design for Quantum Network Routing
Lecture 8 by Dr. Gokul Ravi (UChicago)
Classical Support and Error Mitigation for Variational Quantum Algorithms
Lecture 7 by Dr. Junyu Liu (UChicago)
Lecture 6 by Prof. Tongyang Li (Peking University)
Adaptive Online Learning of Quantum States
Lecture 5 by Prof. Robert Wille (Technische Universität München)
Design Automation and Software Tools for Quantum Computing
Lecture 4 by Siyuan Niu (Montpellier)
Enabling Parallel Circuit Execution on NISQ Hardware
Lecture 3 by Jinglei Cheng (USC)
Introduction to Variational Quantum Algorithms
Lecture 2 by Zhixin Song (Gatech)
A Guided Tour on the Map of Quantum Computing
Lecture 1 by Prof. Yongshan Ding (Yale)
Software and Algorithmic Approaches to Quantum Noise Mitigation: An Overview
G63.2480.001: ADVANCED TOPICS IN APPLIED MATHEMATICS
(Vorticity and Incompressible Flows)
3 points, Spring Term 2011
Thursday, 3:20PM - 5:00PM
Prerequisite: no background besides some familiarity with elementary PDE is needed.
The course will cover material in chapters 1-5 and 7 of the text by Majda and Bertozzi. If time permits, there will also be some lectures on statistical theories for vortices and an introduction to
hurricane dynamics. The main goal is to introduce graduate students, post docs, and visitors to the fascinating interplay among exact solutions, nonlinear analysis, numerical computing, and
statistical ideas in developing intuition about fluid flow.
Vorticity and Incompressible Flow
, Majda & Bertozzi, Cambridge University Press, 2002
Recommended text:
Introduction to PDE and Waves for the Atmosphere and Ocean
, Majda, Courant Lecture Notes, #9, AMS
G63.2480.001: ADVANCED TOPICS IN APPLIED MATHEMATICS
(Real Time Filtering of Turbulent Signals in Complex Systems)
3 points, Spring Term 2010
Thursday, 3:20PM - 5:00PM
Grading: This course will be set up as a reading course.
G63.2830.001 ADVANCED TOPICS IN APPLIED MATHEMATICS (Vorticity and Incompressible Flow) - CLOSED FOR THE SEMESTER
3 points. Fall Term 2009
Thursday, 3:15PM - 5:00PM
Prerequisite: no background besides some familiarity with elementary PDE is needed.
The course will cover material in chapters 1-5 and 7 of the text by Majda and Bertozzi. If time permits, there will also be some lectures on statistical theories for vortices and an introduction to
hurricane dynamics. The main goal is to introduce graduate students, post docs, and visitors to the fascinating interplay among exact solutions, nonlinear analysis, numerical computing, and
statistical ideas in developing intuition about fluid flow.
Grading: this course will be graded as a seminar course.
G63.2840.001: ADVANCED TOPICS IN APPLIED MATHEMATICS (Fluctuation Dissipation Theorems and Climate Change)
3 points, Spring Term 2009
Thursday, 3:15PM - 5:00PM
Can one do climate change response by computing suitable statistics of the present climate? This is an applied challenge of obvious practical importance. This class focuses on these issues from the
viewpoint of modern applied mathematics, where ideas from dynamical systems, statistical physics, information theory, and stochastic-statistical dynamics will be blended with suitable qualitative and
quantitative models and novel numerical algorithms to attack these questions.
The course has no formal requirements, but familiarity with elementary ODE and SDE is useful background. Chapters 2 and 3 of the book, "Information Theory and Stochastics for Multiscale Nonlinear
Systems" by Majda, Abramov and Grote (American Mathematical Society), will provide the introductory material for these topics. Additional material, such as coping with model error and ensemble
predictions, will also be discussed.
Grading: This course will be graded as a seminar course.
Skin cancer mortality rate vs latitude: onlinecourses.science.psu.edu/stat462/node/91/
This semester has taught me that statistics is more than just numbers and data. It taught me how to tell which direction a dataset skews by comparing the mean and the median. I learned how to
get the min, Q1, median, Q3 and maximum. I also learned how to get the standard deviation as well as r and r^2, which are very important in statistics and help you read data much more clearly. This gave
me a great perspective on what statistics is about and a great experience for me to proceed with my educational career.
Final Post
The topics that were taught in this course will be crucial in my academic journey ahead as well as my career. Analyzing data, predicting outcomes and noticing correlations are important tools that I
will need in my field.
Final Post
This semester I learned a lot of topics that I had never known before, like the standard deviation in Excel and R, the sample mean, and R code from DataCamp. I think the topics that I
learned will be useful for my future courses.
Final Post
In statistics one must learn to live with uncertainty. That is the most outstanding difference that I found with other math courses. At the beginning of the course I used to grab a formula and simply
apply it when I did not understand something. But understanding each statistic and concept that is part of a formula gave me, at the same time, a global view of things and an appreciation of how
important attention to detail is. I learned not to fear the formulas, but also not to use them automatically. I learned that behind the steps I performed mechanically when entering Excel data and getting
results, I must understand the mechanisms that lead to those results. Finally, probability is nothing more than evaluating all possibilities. Things do not happen just because; a lot of
conditions had to hold for them to happen.
Final Post
This semester I learned about standard deviation, chi square, sample mean, and a bit of R code. I would use these lesson when I have to code statistics in my future job.
Final Post
This semester I've learned about the binomial distribution, continuous random variables and chi-square. I am able to use Excel properly. Also, DataCamp would help me in other classes.
Final Post
Hey everyone,
Throughout the semester I've learned about sample spaces, expectation and variance, and also the binomial distribution. We are also doing a project comparing 2 numeric variables, and we are going to present it
toward the end of the semester. I was able to use Excel properly; since I had never used Excel in my life, this class helped me with the fundamentals of using it.
Final Post
Hi everyone
Throughout this semester, by taking statistics and other biomedical courses such as Health Care Management, I understood that statistics plays an important role in the biomedical field, and I learned
the relationship between statistics and biomedical information. Descriptive or elementary knowledge of statistics is needed for informatics research such as decision support system evaluation,
understanding the barriers to electronic medical record implementation, information retrieval, summarization of phylogenetic analysis, or spatial clustering for outbreak detection. Knowledge of
statistics is important not only for those conducting research studies, but also for understanding the findings in the biomedical informatics literature and scientific presentations. Now I see why I
need this statistics class in order to be able to take biomedical informatics courses such as Med and Bioinformatics. Statistics is an essential aspect of biomedical informatics.
Final Post
My overall experience in this class made me realize how important statistics with probability is and how it can apply to my studies. As a biomedical informatics major, we have to use statistics and
different types of programs to read and analyze data. The R program, although it was challenging, helped me in so many ways, especially in my Programming for Biologists class. At one point in our
class, we had to analyze DNA sequences to extract an RNA sequence from a DNA template. So, the R commands can be used to do a lot of calculations and graphing. Although I focused on a lot of Python
code, in some way or form the R code triggered a recollection of Python commands because of the similarities between the two languages. For example, a simple calculation in R such as
`x <- c(6, 2, 9, 3, 12, 5, 11)` followed by `mean(x)` returns 6.857. Although R lets you create a vector and call the mean in two commands, Python makes you do the same with a few more steps: create
a variable with the data above, then `x = statistics.mean(place_variable)` and `print(x)`. Printing `x` gives you the mean. So R
is easier and more user-friendly compared to Python, and that was one thing I admired about this class.
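The Python route described above can be sketched with the standard library's `statistics` module, using the same numbers as the R example:

```python
import statistics

x = [6, 2, 9, 3, 12, 5, 11]
print(round(statistics.mean(x), 3))  # 6.857, matching R's mean(x)
```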
By Javana
Graph Analysis Homework
Age affects sleep more than any other natural factor. In REM (rapid eye movement) sleep, muscles are paralyzed and dreaming is active. In NREM (non-rapid eye movement) sleep, the mental activity is
supposedly thought-like rather than bizarre and hallucinogenic. The first graph shows the relationship between hours of sleep and age, which has a very clear negative correlation. On the other hand,
the percentage of that sleep being REM sleep has a slightly irregular negative correlation.
National Institutes of Health (US); Biological Sciences Curriculum Study. NIH Curriculum Supplement Series [Internet]. Bethesda (MD): National Institutes of Health (US); 2007. Information about
Sleep. Available from: https://www.ncbi.nlm.nih.gov/books/NBK20359/
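The negative correlations described above can be quantified with a Pearson r. A small self-contained sketch; note the (age, sleep) pairs are hypothetical illustration values, not the data behind the graphs:

```python
def pearson_r(xs, ys):
    # Pearson correlation coefficient, computed from first principles
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

ages = [1, 5, 10, 20, 40, 60, 80]      # hypothetical ages
sleep = [14, 11, 10, 8, 7.5, 7, 6.5]   # hypothetical hours of sleep
print(pearson_r(ages, sleep) < 0)  # True: older age, fewer hours of sleep
```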
Simple Harmonic Oscillator #1 - Differential Equation
First of all, happy Thanksgiving everyone! I hope you spend the day happily with the people you care about, and remember to spend a moment or two reflecting on the things for which you're thankful
this year. Now on with the show:
Back when I first started writing this blog, I focused mostly on problem solving. The goal was to bridge the gap between popularization and textbook. I was always doubtful there was much of a market
for this, but of course there are at least some interested people, and since writing is so fun I was, and am, more than happy to fill that gap. Over the last few months, though, general
grad student busyness has greatly reduced the time available for those kinds of posts. And so there have been more soft-physics posts around here. Still interesting, I hope, but there's really
no shortage of that sort of thing elsewhere. As such I'm going to try to improve the ratio of more in-depth fare a bit. I can't promise I'll be super consistent about it, but here's hoping y'all will
bear with me!
Let's kick it off with perhaps the most important model in physics: the simple harmonic oscillator. It's ubiquitous in everything from solid state physics to quantum field theory, but when it comes
right down to it, the harmonic oscillator is a spring. Its defining property is that the force acting on the spring is proportional to the displacement of the mass from equilibrium. Move the mass
farther from its resting point, and the restoring force is proportionally stronger. Wikipedia has a nice image:
We can write "the force is proportional to the stretch" mathematically in the following way:
\(m\ddot{x} = -kx\)
The variable x is the position of the mass on the spring, and it's a function of time. Dots denote differentiation with respect to time, so x-dot-dot is the rate of change in the rate of change of
position. Sounds bad, but that's just another name for the acceleration. The spring constant (the force produced by the spring per unit of stretch beyond equilibrium) is k, the mass of the object is
m. Now if you know about solving differential equations, we can actually find the particular function x(t) that satisfies that equation. Physicists usually solve this kind of equation by the method
of recognition - we've seen it so much we just know what the solution is. For those who haven't seen it so much, it's the pretty much the first thing you'll learn in differential equations class, and
if you don't want to take the class it's ok because the problem is not difficult and either way I'm just going to tell you the solution. ;)
The solution is thus:
\(x(t) = A\cos(\omega t) + B\sin(\omega t)\)
Actually the solution contains k and m, but to make the equation look simpler, I've just substituted in omega, where it's an abbreviation for a slightly clumsier expression:
\(\omega = \sqrt{k/m}\)
And that's the general solution, for arbitrary constants A and B. Now really that's not good enough. We might have started this oscillator off close or far from equilibrium, or we might have just let
it go or given it a good shove. These initial conditions - the initial position and initial velocity - determine what A and B are for our specific physical situation. Let's go ahead and nail the
situation down. Call the initial position x0. That's the value of the position at t = 0 when the clock started and the system started oscillating. At t = 0, the value being plugged into sin and cos
is 0. But sin(0) = 0 and cos(0) = 1, so we see that x0 = A. Nice, that pegs one of our constants. Now do the other. To find the velocity from the position equation, we differentiate with respect to
time. Doing this with the knowledge of A, we see that:
\(\dot{x}(t) = -x_0\,\omega\sin(\omega t) + B\,\omega\cos(\omega t)\)
Now we're working with initial conditions, so set t = 0 again. The sin term goes to zero, the cos term goes to 1, and therefore the initial velocity v0 = B*omega. Solve for B, substitute into the
general solution:
\(x(t) = x_0\cos(\omega t) + \frac{v_0}{\omega}\sin(\omega t)\)
Ok, that's great but what does it mean? For starters, it means that the simple harmonic oscillator oscillates. It repeats its motion over and over with an angular frequency of omega. We might have
guessed that it would oscillate, but thanks to the math we know it does so in an exactly sinusoidal way. Further, we now know the timing of this oscillation in terms of other physical constants, and
we can relate amplitudes and velocities to the initial conditions. It's quite a bit of an improvement over the qualitative "back and forth" description we might have managed without math.
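The closed-form solution can be double-checked numerically. Here's a sketch (with arbitrary test values for k, m, and the initial conditions) that integrates the equation of motion with velocity Verlet and compares against the formula:

```python
import math

k, m = 4.0, 1.0        # spring constant and mass (arbitrary test values)
w = math.sqrt(k / m)   # omega = sqrt(k/m)
x0, v0 = 1.0, 0.5      # initial position and velocity

def analytic(t):
    # x(t) = x0 cos(wt) + (v0/w) sin(wt)
    return x0 * math.cos(w * t) + (v0 / w) * math.sin(w * t)

# Independent check: integrate m*x'' = -k*x with velocity Verlet
dt, steps = 1e-4, 30_000
x, v = x0, v0
for _ in range(steps):
    a = -k / m * x
    x += v * dt + 0.5 * a * dt * dt
    v += 0.5 * (a + (-k / m * x)) * dt

print(abs(x - analytic(steps * dt)) < 1e-5)  # True: numerics match the formula
```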
Why is this post given a #1 in the title, by the way? It's because there are about a zillion different and important ways to do physics with the harmonic oscillator. Even from a purely classical
perspective, there's this, the Lagrangian formulation, the Hamiltonian formulation, the Poisson bracket formulation, action-angle variables, you name it. Plenty of these are graduate level, but I
think I can make them interesting in a guided-tour way for those who aren't fluent in math-speak. I plan to tackle many of these methods over the next weeks.
omega = sqrt (k/m)
if k is negative, this becomes imaginary. Which makes total sense. You'd expect the system to zip away exponentially, and at a guess you wind up with the hyperbolic sine.
Depending on your initial conditions, yes, hyperbolic sine would be one of the possibilities. In general your solution would be A exp(γt) + B exp(-γt) where γ = iω. This gives you a hyperbolic sine
if A = -B and a hyperbolic cosine if A = B.
For that matter, you can also write the solution of the original problem as C exp(iωt) + D exp(-iωt), and there are situations where this form is more convenient. Here C and D are complex numbers,
and if you want the solution to be purely real they will be complex conjugates of one another. You get the cosine term out of the real part and the sine term out of the imaginary part.
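That conjugate condition is easy to verify numerically with nothing but the standard library. In this sketch (the complex amplitude C is an arbitrary choice), the imaginary parts cancel and the real part reduces to 2 Re(C) cos(ωt) - 2 Im(C) sin(ωt):

```python
import cmath
import math

omega = 2.0
C = 0.7 - 0.3j          # arbitrary complex amplitude
D = C.conjugate()       # a purely real solution requires D = conj(C)

for i in range(50):
    t = 0.1 * i
    z = C * cmath.exp(1j * omega * t) + D * cmath.exp(-1j * omega * t)
    # the imaginary parts cancel...
    assert abs(z.imag) < 1e-12
    # ...and the real part is A cos(wt) + B sin(wt) with A = 2 Re(C), B = -2 Im(C)
    expected = 2 * C.real * math.cos(omega * t) - 2 * C.imag * math.sin(omega * t)
    assert abs(z.real - expected) < 1e-12
print("purely real, matches A cos + B sin")
```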
Would this be something for John Wilkins's "Basic Concepts in Science" collection? I haven't checked whether he has one on the harmonic oscillator. His site, after some time at scienceblogs, is at
But sin(0) = 1 and cos(0) = 1, so we see that x0 = A
IANAMOAP (I am not a mathematician or a physicist :-) but I thought that sin(0) was 0?
I love the "guided math" posts, incidentally. They won't garner as much comment traffic, because it's harder to come up with something cogent to write about them, but I probably speak for many when I
say that we appreciate a little more signal in the blogosphere noise.
Yep, sin(0) = 0. I generally make one or two egregious typos when I do this kind of post, but among the things I'm thankful for this year are readers who catch them!
Thanks for the kind words, Winawer. Though my main focus is curious readers, it's also nice to know that the "signal" might also help out lost students using Google for some guidance.
One interesting aspect of this result is that it takes functions (sin and cos) which are usually introduced in trigonometry and shows that they are fundamental in dynamics. Experienced mathematicians
and physicists are so used to this that we forget how remarkable it is that concepts introduced in the study of the static geometry of triangles are directly applicable to objects bouncing on springs
or waves travelling through the air.
As a high school Physics teacher I regularly make the connections between the lab work we do where we generate a variety of graphs that are the functions they see in their math classes. The first
time they graph the x vs t data for an object accelerated by gravity and they get a parabola it is eye opening. Having also taught math I have an appreciation for the math and the fact that they
often learn the functions and processes devoid of any physical context. Those who are taking a first year calculus course very quickly make the connection between the first derivative of the
displacement equation and the velocity equation. Since we only deal with situations of constant acceleration in the algebra based course, they see that the second derivative yields a constant and it all
falls into place.
I continually emphasize that they learn the math concepts in their courses because those functions describe how nature operates. When we analyze the simple harmonic oscillator in the form of a spring
and they see the d, v and a functions in real time they can make the connection between a sine curve and its first and second derivatives.
Thank you for this blog. I regularly direct my advanced students to your posts for explanations that are more cogent than what they get in their text and from me. Keep it up and thanks. For my calc
based students this will be a link I send them!
One reason why the SHO is so ubiquitous in physics is because it has a quadratic potential. A Taylor expansion of any potential around a stable equilibrium will look like a quadratic potential to
lowest order. Thus, a small perturbation of any system from a stable equilibrium will oscillate harmonically.
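As a concrete check of that Taylor-expansion argument, here's a small sketch (plain Python; the Morse potential is just an illustrative choice of anharmonic potential with a stable minimum) recovering the effective spring constant from the curvature at the minimum:

```python
import math

# An illustrative anharmonic potential with a stable minimum at x = 0:
# the Morse potential U(x) = D (1 - exp(-a x))^2. Its exact curvature at
# the minimum is U''(0) = 2 D a^2, so small oscillations behave like a
# spring with k_eff = 2 D a^2 and frequency sqrt(k_eff / m).
D, a, m = 3.0, 1.5, 2.0

def U(x):
    return D * (1 - math.exp(-a * x)) ** 2

# central second difference approximates U''(0)
h = 1e-5
k_eff = (U(h) - 2 * U(0.0) + U(-h)) / h**2

print(k_eff, 2 * D * a**2)        # numerically equal
print(math.sqrt(k_eff / m))       # small-oscillation frequency
```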
In statistical and quantum mechanics, the SHO is also important because its kinetic and potential energy are both quadratic (in momentum and position, respectively), and this leads to simple Gaussian
statistics (p(x) ~ exp(-x^2)).
Great plan.
I like the idea of doing the simple problem multiple ways, but don't overlook the interesting variations on the simple problem. One, when you add damping without a driving term, is most amenable to
solution with the method mentioned by Eric in the second paragraph @2. Another, where you keep the cubic term in the small angle expansion for the pendulum version of this problem, leads to an
anharmonic solution.
(related to: CCPhysicist's comment) Another fascinating thing about SHM is how it becomes like an "onion" when you make the model a bit more realistic (nonlinear spring). Of course then it's not
"simple" anymore.
Peeling the onion more and more layers is fascinating - you can use this to understand the true nature of "natural" sound (ever heard a perfect middle-C sine-wave? It's unnerving and very annoying.
Hearing a musical instrument playing that same principal frequency is a whole different experience). It's all in
a) the shape of the waves (a violin doesn't make sine-waves), and
b) the subtle variations in frequency about the mean frequency.
c) the "strange attractor" nature of naturally-produced sound
Yes - SHM is critically important as the starting point, and can lead to many (re)discoveries for the scientifically-minded.
Discovering this stuff for students can be as simple as
- downloading Audacity (open-source)
- recording a musical instrument playing (all it takes is a microphone which most laptops have built in nowadays)
- zooming into the wave-form to amaze all who may never have done this before
- generating a regular sine-wave of the same frequency (trial and error, or just use the principal frequency you can get from a frequency-spectrum plot of the natural sound-wave)
Play these sounds one after the other and see who likes the sine-wave.
I have found that this kind of "show me" science can light fires inside of students who begin asking questions and searching for answers.
Solving the nonlinear oscillator equations has a somewhat higher entry-barrier, but there are ways of doing this with things like Matlab and so on.
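For example, here's a sketch using SciPy's solve_ivp (standing in for Matlab's ODE solvers; the units and amplitude are arbitrary) that integrates the full pendulum equation alongside its small-angle linearization:

```python
import numpy as np
from scipy.integrate import solve_ivp

theta0 = 1.0   # about 57 degrees: too large for the small-angle approximation
t = np.linspace(0, 20, 2000)

# full pendulum: theta'' = -sin(theta), in units where g/L = 1
full = solve_ivp(lambda t, y: [y[1], -np.sin(y[0])],
                 (0, 20), [theta0, 0.0], t_eval=t, rtol=1e-9, atol=1e-12)

# small-angle (simple harmonic) version: theta'' = -theta
sho = theta0 * np.cos(t)

# the anharmonic period is longer, so the two drift apart over a few cycles
drift = np.max(np.abs(full.y[0] - sho))
print(drift)
```

At a one-radian amplitude the true period is a few percent longer than the small-angle one, so the two solutions visibly dephase after only a few cycles.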
If anyone would like I'd be happy to help with some instructions, screenshots etc. on how to do the sound-wave investigation, and provide some references and sample-solutions data for the
nonlinear equation-solution.
The email address above is real but if you want to get my attention put the word SHM (no spaces or periods) in the subject-line.
I got overzealously excited about SHM and I skimmed way past the end of Matt's article, and only on re-reading do I realize that Matt is already promising to delve into the Aladdin's cave of SHM and
where it leads.
Matt - sorry - I didn't mean to hijack your post and planned curriculum.
90% of physics is the harmonic oscillator. the other half is a particle in a box.
@Paul Murray
The negative sign in mx'' = -kx indicates that the spring force is a restoring force (one which acts in a direction opposite of displacement) rather than a constant with a value less than zero. I could
be mistaken, but I believe the negative is omitted in the equation omega = sqrt(k/m)
Thanks for your blog! It seems like every time I check it you directly address something I recently learned in mechanics. Keep being awesome and informative :)
As a frustrated university physics student, I am wondering about how to solve the differential equation analytically.
All the physics textbooks that I have looked in simply "assume" that an equation of that form will be a solution, and then check to make sure that it is.
Assuming "test" solutions and then using them seems completely unscientific and arbitrary to me. This takes away the foundation of what I thought math and physics were all about!
I will keep searching on the internet, but could you explain how to solve the differential equation without simply assuming a solution?
This differential equation is in the form: d2x/dt2 +kx = 0, where k is a constant.
Using your knowledge of differential equations, solve the characteristic equation ar^2 + br + c = 0, which in this instance is r^2 = -k. r = +/- ki, where i is sqrt(-1). Let r1 = 0 + ki, r2 = 0 - ki
The general solution for a 2nd order, linear, homogeneous, differential equation is X(t) = C*e^(r1*t) + D*e^(r2*t)
In this case, r1 and r2 are complex. Use Euler's Formula now.
e^ix = cosx + isinx
e^(ikt) = coskt + i sinkt, e^(-ikt) = coskt - i sinkt
Now really, e^(r1 t) is e^(real part * t) * e^(imaginary part * t), but in this case the real part = 0, (r1 = 0 + ki). So e^(r1 t) = (e^0)*(e^ikt) = 1*e^ikt.
Ok so read back up to the general solution, it was e^r1t etc., except multiplied by constants C and D. So X(t) = C(coskt + isinkt) + D(cos-kt + isin-kt) = C(coskt + isinkt) + D(coskt - isinkt) (as
cos-A = cosA, sin-A = -sinA).
Rearrange to get (C+D)coskt + ((C-D)i)sinkt
So X(t) = Acoskt + Bsinkt, for new constants A and B.
Note that k here is not the same k as the spring constant, I just picked it randomly, but obviously couldn't have picked a worse letter!
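For what it's worth, a computer algebra system will carry out this exact procedure for you. A sketch with SymPy (written with ω² as the constant, to avoid the k-naming clash above):

```python
import sympy as sp

t = sp.symbols('t')
w = sp.symbols('omega', positive=True)
x = sp.Function('x')

# x'' + omega^2 x = 0
ode = sp.Eq(x(t).diff(t, 2) + w**2 * x(t), 0)
sol = sp.dsolve(ode, x(t))
print(sol)    # a sin/cos combination with arbitrary constants C1, C2

# verify that the returned solution actually satisfies the ODE
ok, residual = sp.checkodesol(ode, sol)
print(ok)
```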
If your uncle borrows $51,000 from the bank at 9 percent interest over the seven-year life...
Your uncle borrows $51,000 from the bank at 9 percent interest over the seven-year life of the loan. Use Appendix D for an approximate answer but calculate your final answer using the formula and
financial calculator methods.
a. What equal annual payments must be made to discharge the loan, plus pay the bank its required rate of interest? (Do not round intermediate calculations. Round your final answer to 2 decimal places.)
Annual payments $10,133.52
b. How much of his first payment will be applied to interest? To principal? (Do not round intermediate calculations. Round your final answers to 2 decimal places.)
First Payment: Interest $4,590.00, Principal $5,543.52
c. How much of his second payment will be applied to each? (Do not round intermediate calculations. Round your final answers to 2 decimal places.)
Second Payment: Interest $4,091.08, Principal $6,042.44
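The whole schedule is easy to reproduce in a few lines of Python. This sketch uses the exact annuity formula, so the cents can differ slightly from table-based (Appendix D) answers:

```python
principal, rate, years = 51_000.0, 0.09, 7

# annuity payment: PV * r / (1 - (1 + r)^-n)
payment = principal * rate / (1 - (1 + rate) ** -years)

balance = principal
schedule = []
for year in range(1, years + 1):
    interest = balance * rate           # interest accrues on the remaining balance
    principal_part = payment - interest
    balance -= principal_part
    schedule.append((year, round(interest, 2), round(principal_part, 2), round(balance, 2)))

print(f"Annual payment: {payment:.2f}")   # about 10,133.22 by the exact formula
for row in schedule[:2]:                  # (year, interest, principal, balance)
    print(row)
```

The first-year interest is simply 51,000 × 0.09 = $4,590.00; everything after the first row depends on how the payment itself was rounded, which is why formula and table answers can differ by a few cents.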
How does lisa palette work?
The function lisa_palette allows users to call and modify palettes by adjusting the parameters n and type where n represents the number of colors and type represents discrete or continuous. So how
does this work? Under the hood, grDevices::colorRampPalette() is doing all the work, see here. Unfortunately, I am unable to dive into the details of how colorRampPalette() works but we can at least
see how this works visually. The palette we will use is AndyWarhol_3, Andy Warhol’s Mick Jagger.
To compare how things change with lisa_palette we can create a list of 10 palettes:
1. 5 discrete palettes with n = 1:5
2. 5 continuous palettes with n = 1:5
Note: You will not be able to reproduce this plot with the released version from CRAN, install the latest from GitHub if you want to reproduce. The CRAN version doesn’t allow you to modify the plot
x <- lapply(1:5, function(x) structure(lisa_palette("AndyWarhol_3", n = x, "continuous"), name = paste0(x, ", continuous")))
y <- lapply(1:5, function(x) structure(lisa_palette("AndyWarhol_3", n = x, "discrete"), name = paste0(x, ", discrete")))
par(mfrow = c(2, 5))
lapply(c(x, y), plot)
The behavior for discrete palettes is pretty straightforward: it just picks 1:n colors from the palette vector, for example:
lisa_palette("AndyWarhol_3", n = 3, type = "discrete")
#> * Work: Mick Jagger
#> * Author: AndyWarhol_3
#> * Colors: #a99364 #da95aa #f4f0e4
If you ask for more than 5 colors palettes, it’ll throw an error because only 5 exist.
lisa_palette("AndyWarhol_3", n = 6, type = "discrete")
#> Error in lisa_palette("AndyWarhol_3", n = 6, type = "discrete"): Number of requested colors greater than what palette can offer
The behavior for continuous palettes is a bit different, it tries to interpolate a set of colors to create a new palette. It does this with colorRampPalette() from the grDevices package:
Which is equivalent to:
ramp <- colorRamp(lisa$AndyWarhol_3)
x <- ramp(seq.int(0, 1, length.out = 3))
if (ncol(x) == 4L) {
rgb(x[, 1L], x[, 2L], x[, 3L], x[, 4L], maxColorValue = 255)
} else {
rgb(x[, 1L], x[, 2L], x[, 3L], maxColorValue = 255)
}
#> [1] "#A99364" "#F4F0E4" "#C2DDB2"
Where the code chunk above is the source code for colorRampPalette() with a few changes to the formatting, see here for the true source code.
Having said all of this, in order to truly understand how this works, you would need to analyze that chunk of code and any other chunks that it depends on. Beware, from this point on it gets
complicated (for me at least) and I don't think I could accurately articulate what's going on behind the scenes, so I won't.
In short, colorRampPalette() depends on a secondary function colorRamp() which depends on convertColor() and so on, see the history of these functions here and the initial commit here by Thomas
Lumley in 2004. What's interesting is the link Thomas provided in the source code, which seems to be where a lot of the math involved in color interpolation comes from. For the layman like myself, I
will just embrace the magic behind color interpolation.
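Still, to take a little of the magic out of it, the core idea of piecewise-linear interpolation between palette stops in RGB space can be sketched in plain Python. This is a rough stand-in for the default behavior of colorRampPalette(), ignoring its colour-space and bias options:

```python
def hex_to_rgb(h):
    return tuple(int(h[i:i + 2], 16) for i in (1, 3, 5))

def rgb_to_hex(rgb):
    return "#%02x%02x%02x" % rgb

def ramp(colors, n):
    """Return n colors linearly interpolated along the input palette."""
    stops = [hex_to_rgb(c) for c in colors]
    out = []
    for k in range(n):
        # position along the palette in [0, 1], mapped onto the segments
        pos = k / (n - 1) * (len(stops) - 1) if n > 1 else 0.0
        i = min(int(pos), len(stops) - 2)
        t = pos - i
        a, b = stops[i], stops[i + 1]
        out.append(rgb_to_hex(tuple(round(a[c] + t * (b[c] - a[c])) for c in range(3))))
    return out

print(ramp(["#000000", "#ffffff"], 3))   # endpoints plus the midpoint grey
```

When n equals the number of stops, each requested color lands exactly on a stop, which matches the intuition that a "continuous" palette of the same length reproduces the discrete one.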
Hands-on lab on error mitigation with Mitiq.
Hands-on lab on error mitigation with Mitiq.#
This is a hands-on notebook created for the SQMS/GGI 2022 Summer School on Quantum Simulation of Field Theories.
It is a guided tutorial on error mitigation with Mitiq and is focused on the zero-noise extrapolation (ZNE) technique. As this is intended to be a hands-on exercise, the solutions to the examples are
linked at the end of the notebook.
Useful links :
Checking Python packages are installed correctly#
This notebook was tested with Mitiq v0.20.0 and qiskit v0.39.0. It probably works with other versions too. Moreover, with minor changes, it can be adapted to quantum libraries that are different from
Qiskit: Cirq, Braket, PyQuil, etc..
If you need to install Mitiq and/or Qiskit, you can uncomment and run the following cells.
# !pip install mitiq==0.20.0
# !pip install qiskit==0.39.0
You can check your locally installed version of Mitiq and of the associated frontend libraries by running the next cell.
from mitiq import about
Mitiq: A Python toolkit for implementing error mitigation on quantum computers
Authored by: Mitiq team, 2020 & later (https://github.com/unitaryfund/mitiq)
Mitiq Version: 0.41.0
Core Dependencies
Cirq Version: 1.4.0
NumPy Version: 1.26.4
SciPy Version: 1.14.1
Optional Dependencies
PyQuil Version: 3.5.4
Qiskit Version: Not installed
Braket Version: 1.69.1
Python Version: 3.12.3
Platform Info: Linux (x86_64)
Computing a quantum expectation value without error mitigation#
Define the circuit of interest#
For example, we define a circuit \(U\) that prepares the GHZ state for \(n\) qubits.
\[ U |00...0\rangle = \frac{|00...0\rangle + |11...1\rangle}{\sqrt{2}} \]
This can be done by manually defining a Qiskit circuit or by calling the Mitiq function mitiq.benchmarks.generate_ghz_circuit().
from mitiq.benchmarks import generate_ghz_circuit
n_qubits = 7
circuit = generate_ghz_circuit(n_qubits=n_qubits, return_type="qiskit")
print("GHZ circuit:")
GHZ circuit:
q_0: ┤ H ├──■───────────────────────────
q_1: ─────┤ X ├──■──────────────────────
q_2: ──────────┤ X ├──■─────────────────
q_3: ───────────────┤ X ├──■────────────
q_4: ────────────────────┤ X ├──■───────
q_5: ─────────────────────────┤ X ├──■──
q_6: ──────────────────────────────┤ X ├
Let us define the Hermitian observable:
\[ A = |00...0\rangle\langle 00...0| + |11...1\rangle\langle 11...1|.\]
In the absence of noise, the expectation value of \(A\) is equal to 1:
\[{\rm tr}(\rho_{\rm} A)= \langle 00...0| U^\dagger A U |00...0\rangle= \frac{1}{2} + \frac{1}{2}=1.\]
In practice this means that, when measuring the state in the computational basis, we can only obtain either the bitstring \(00\dots 0\) or the bitstring \(11\dots 1\).
In the presence of noise instead, the expectation value of the same observable \(A\) will be smaller. Let’s verify this fact, before applying any error mitigation.
Run the circuit with a noiseless backend and with a noisy backend#
Hint: As a noiseless backend you can use the AerSimulator class. As a noisy backend you can use a fake (simulated) device as shown here.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator
from qiskit.visualization import plot_histogram
from qiskit import transpile
from qiskit_ibm_runtime.fake_provider import FakeJakartaV2 as FakeJakarta # Fake (simulated) QPUs
# Number of measurements
shots = 10 ** 5
We first execute the circuit on an ideal noiseless simulator.
ideal_backend = AerSimulator()
# Append measurement gates
circuit_to_run = circuit.copy()
# TODO: Run circuit_to_run on the ideal backend and get the ideal counts
plot_histogram(ideal_counts, title='Counts for an ideal GHZ state')
We now execute the same circuit on a noisy backend (a classical emulator of a real QPU)
noisy_backend = FakeJakarta() # QPU emulator
# Compile the circuit into the native gates of the backend
compiled_circuit = transpile(circuit_to_run, noisy_backend)
# Run the simulation on the noisy backend
# TODO: Run circuit_to_run on the noisy backend and get the noisy counts
plot_histogram(noisy_counts, title='Counts for a noisy GHZ state', figsize=(15, 5))
ideal_expectation_value = # TODO: get <A> from ideal_counts
print(f"The ideal expectation value is <A> = {ideal_expectation_value}")
noisy_expectation_value = # TODO: get <A> from noisy_counts
print(f"The noisy expectation value is <A> = {noisy_expectation_value}")
Apply zero-noise extrapolation with Mitiq#
Before using Mitiq we need to wrap the previous code into a function that takes a circuit as input and returns the noisy expectation value of the observable \(A\). This function will be used by Mitiq
as a black box during the error mitigation process.
def execute(compiled_circuit):
"""Executes the input circuits and returns the expectation value of A=|00..0><00..0| + |11..1><11..1|."""
print("Executing a circuit of depth:", compiled_circuit.depth())
    # TODO: copy and paste the instructions that we previously used to obtain the noisy <A>.
return noisy_expectation_value
Let us check if the function works as expected.
print(f"The noisy expectation value is <A> = {execute(compiled_circuit)}")
We can now apply zero-noise extrapolation with Mitiq. Without advanced options, this requires a single line of code.
from mitiq import zne
zne_value = zne.execute_with_zne(
circuit= # TODO... docs: https://mitiq.readthedocs.io/en/stable/apidoc.html#module-mitiq.zne.zne
executor= # TODO...
)
print(f"The error mitigated expectation value is <A> = {zne_value}")
Note: As you can see from the printed output, Mitiq calls the execute function multiple times (3 in this case) to evaluate circuits of different depths in order to extrapolate the ideal result.
Let us compare the absolute estimation error obtained with and without Mitiq.
print(f"Error without Mitiq: {abs(ideal_expectation_value - noisy_expectation_value)}")
print(f"Error with Mitiq: {abs(ideal_expectation_value - zne_value)}")
Explicitly selecting the noise-scaling method and the extrapolation method#
from mitiq import zne
# Select a noise scaling method
folding_function = zne.scaling.fold_global
# Select an inference method
factory = zne.inference.RichardsonFactory(scale_factors = [1.0, 2.0, 3.0])
zne_value = zne.execute_with_zne(
# TODO: pass the "folding_function" and the "factory" as arguments.
# See docs: https://mitiq.readthedocs.io/en/stable/apidoc.html#module-mitiq.zne.zne
)
print(f"The error mitigated expectation value is <A> = {zne_value}")
What happens behind the scenes? A low-level application of ZNE#
In Mitiq one can indirectly amplify noise by intentionally increasing the depth of the circuit in different ways.
For example, the function zne.scaling.fold_gates_at_random() applies the transformation \(G \rightarrow G G^\dagger G\) to each gate of the circuit (or to a random subset of gates).
STEP 1: Noise-scaled expectation values are evaluated via gate-level “unitary folding” transformations#
locally_folded_circuit = # apply fold_gates_at_random() to "circuit" with scale factor of 3.
# Link to docs: https://mitiq.readthedocs.io/en/stable/apidoc.html#mitiq.zne.scaling.folding.fold_gates_at_random
print("Locally folded GHZ circuit:")
Note: To get a simple visualization, we didn't apply the preliminary circuit transpilation that we used in the previous section.
Alternatively, the function zne.scaling.fold_global() applies the transformation \(U \rightarrow U U^\dagger U\) to the full circuit.
globally_folded_circuit = # apply fold_global() to "circuit" with scale factor of 3.
# Link to docs: https://mitiq.readthedocs.io/en/stable/apidoc.html#mitiq.zne.scaling.folding.fold_global
print("Globally folded GHZ circuit:")
In both cases, the results are longer circuits which are more sensitive to noise. Those circuits can be used to evaluate noise scaled expectation values.
For example, let’s use global folding to evaluate a list of noise scaled expectation values.
scale_factors = [1.0, 2.0, 3.0]
# It is usually better to apply unitary folding to the compiled circuit
noise_scaled_circuits = [zne.scaling.fold_global(compiled_circuit, s) for s in scale_factors]
# We run all the noise scaled circuits on the noisy backend
noise_scaled_vals = [execute(c) for c in noise_scaled_circuits]
print("Noise-scaled expectation values:", noise_scaled_vals)
STEP 2: Inference of the ideal result via zero-noise extrapolation#
Given the list of noise scaled expectation values, one can extrapolate the zero-noise limit. This is the final classical post-processing step.
# Initialize a Richardson extrapolation object
richardson_factory = zne.RichardsonFactory(scale_factors)
# Load the previously measured data
for s, val in zip(scale_factors, noise_scaled_vals):
richardson_factory.push({"scale_factor": s}, val)
print("The Richardson zero-noise extrapolation is:", richardson_factory.reduce())
_ = richardson_factory.plot_fit()
# Initialize a linear extrapolation object
linear_factory = # TODO... see docs: https://mitiq.readthedocs.io/en/stable/apidoc.html#mitiq.zne.inference.LinearFactory
# Load the previously measured data
for s, val in zip(scale_factors, noise_scaled_vals):
linear_factory.push({"scale_factor": s}, val)
print("The linear zero-noise extrapolation is", linear_factory.reduce())
_ = linear_factory.plot_fit()
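Under the hood these factories are doing ordinary polynomial fitting. Here is a plain-NumPy cross-check on synthetic data (an assumed quadratic noise model, not Mitiq's internals), where Richardson extrapolation with three scale factors amounts to fitting a degree-2 polynomial and evaluating it at zero:

```python
import numpy as np

scale_factors = [1.0, 2.0, 3.0]

# synthetic noisy expectation values following an assumed quadratic decay
# E(s) = 1 - 0.10*s - 0.02*s^2, so the true zero-noise value is E(0) = 1
noise_scaled_vals = [1 - 0.10 * s - 0.02 * s**2 for s in scale_factors]

# Richardson extrapolation with 3 points: fit a degree-2 polynomial
# through the data and evaluate it at scale factor 0
coeffs = np.polyfit(scale_factors, noise_scaled_vals, deg=2)
zne_estimate = np.polyval(coeffs, 0.0)

# linear extrapolation instead fits a degree-1 polynomial by least squares
lin_coeffs = np.polyfit(scale_factors, noise_scaled_vals, deg=1)
lin_estimate = np.polyval(lin_coeffs, 0.0)

print(zne_estimate, lin_estimate)
```

For data that really follows the assumed quadratic model, the Richardson estimate recovers the zero-noise value exactly, while the linear fit over- or under-shoots, which is the usual bias/variance trade-off between the two factories.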
Note: We evaluated two different extrapolations without measuring the system twice. This is possible since the final extrapolation step is simply a classical post-processing of the same measured data.
The solutions to this notebook are available here.
Combining Independent Voltage Sources in Series
Combining Independent Sources
An inspection of the KVL equations for a series circuit shows that the order in which elements are placed in a series circuit makes no difference. An inspection of the KCL equations for a
parallel circuit shows that the order in which elements are placed in a parallel circuit makes no difference. We can use these facts to simplify voltage sources in series and current sources in
Combining Independent Voltage Sources in Series
It is not possible to combine independent voltage sources in parallel, since this would violate KVL. However, consider the series connection of two ideal voltage sources shown in (a) below:
From KVL we know that v = v1 + v2 , and by the definition of an ideal voltage source, this must be the voltage between nodes a and b, regardless of what is connected to them. Thus, the series
connection of two ideal voltage sources is equivalent to a single independent voltage source given by:
Clearly, the obvious generalization to N voltage sources in series holds.
In a previous example we determined the current i in the one-loop circuit shown below:
By rearranging the order in this one loop circuit (of course this does not affect i), we obtain the circuit shown below:
We can now combine the series independent voltage sources and the series resistors into single equivalent elements:
By Ohm’s Law:
i = -24/12 = -2 A
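The same bookkeeping is trivial to script. In this sketch the individual element values are placeholders chosen so the totals match the worked example's net -24 V and 12 Ω (the original figure isn't reproduced here):

```python
# Placeholder element values: two series sources and two series resistors
# whose totals match the worked example (net -24 V across 12 ohms).
voltage_sources = [-10.0, -14.0]   # volts, signed per the KVL reference direction
resistors = [4.0, 8.0]             # ohms

# series voltage sources add algebraically; series resistors add
v_eq = sum(voltage_sources)
r_eq = sum(resistors)

# Ohm's law for the single equivalent loop
i = v_eq / r_eq
print(f"v_eq = {v_eq} V, r_eq = {r_eq} ohm, i = {i} A")   # i = -2.0 A
```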
Documentation With a gradientCurve the ascent and descent of a track can be described.
Subschema infrastructure
Parents^* gradientCurves
Children areaLocation (0..*), beginsInGeometryPoint (0..1), endsInGeometryPoint (0..1), gmlLocation (0..*), isValid (0..*), linearLocation (0..*), name (0..*), networkLocation (0..*),
spotLocation (0..*)
• curveType: type of vertical curve, e.g. arc or straight (obligatory; xs:string)
Possible values:
□ arc: curve, that link two (e.g. straight) elements of gradient profile of a railway track, with a constant radius that is not infinite
□ mixed: an aggregated curve with arcs and straight parts
□ straight: curve with constant infinite radius (zero curvature),
• deltaGradient: change of gradient of the gradient curve in per mille;
use this attribute if the gradient value of the gradient curve is not constant, but changing;
the delta gradient shall be calculated as difference of gradient at the end and gradient at the beginning of the gradient curve (optional; xs:decimal),
• gradient: constant gradient of the gradient curve in per mille;
positive values indicate an upward slope (rise), negative values indicate a downward slope (fall) (optional; xs:decimal),
• length: length of the gradient curve in metres;
use this attribute in particular to define the arc length (optional; xs:decimal),
• radius: radius of the gradient curve in metres;
use negative values to describe the arc curve of a valley and use positive values to describe the arc curve of a hill (optional; xs:decimal),
• id: the identifier of the object; this can be either of type xs:ID or UUID (obligatory; xs:ID); compare: Dev:Identities
Elements may have different parent elements. As a consequence they may be used in different contexts.
Please consider this as a user of this wiki as well as when developing this documentation further.
Aspects that are only relevant with respect to one of several parents should be explained exclusively in the documentation of the respective parent element.
Documentation vertical alignment of railway track
Subschema infrastructure
Parents^* gradientCurves
Children areaLocation (0..*), beginsInGeometryPoint (0..1), endsInGeometryPoint (0..1), gmlLocations (0..*), isValid (0..*), linearLocation (0..*), name (0..*), networkLocation (0..*),
spotLocation (0..*)
• curveType: type of vertical curve, e.g. arc or straight (obligatory; xs:string)
Possible values:
□ arc: curve, that link two (e.g. straight) elements of gradient profile of a railway track, with a constant radius that is not infinite
□ mixed: an aggregated curve with arcs and straight parts
□ straight: curve with constant infinite radius (zero curvature),
• deltaGradient: change of gradient of the gradient curve in per mille;
use this attribute if the gradient value of the gradient curve is not constant, but changing;
the delta gradient shall be calculated as difference of gradient at the end and gradient at the beginning of the gradient curve (optional; xs:decimal),
• gradient: constant gradient of the gradient curve in per mille;
positive values indicate an upward slope (rise), negative values indicate a downward slope (fall) (optional; xs:decimal),
• length: length of the gradient curve in metres;
use this attribute in particular to define the arc length (optional; xs:decimal),
• radius: radius of the gradient curve in metres;
use negative values to describe the arc curve of a valley and use positive values to describe the arc curve of a hill (optional; xs:decimal),
• id: the identifier of the object; this can be either of type xs:ID or UUID (obligatory; xs:string; patterns: (urn:uuid:)?[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]
{12}|{[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}}); compare: Dev:Identities
Elements may have different parent elements. As a consequence they may be used in different contexts.
Please consider this as a user of this wiki as well as when developing this documentation further.
Aspects that are only relevant with respect to one of several parents should be explained exclusively in the documentation of the respective parent element.
Documentation This element is not documented in the schema!
Subschema infrastructure
Parents^* gradientCurves
Children any (0..*), areaLocation (0..*), beginsInGeometryPoint (0..1), endsInGeometryPoint (0..1), gmlLocations (0..*), isValid (0..*), linearLocation (0..*), name (0..*), networkLocation
(0..*), spotLocation (0..*)
• curveType: type of vertical curve, e.g. arc or straight (obligatory; xs:string)
Possible values:
□ mixed: an aggregated curve with arcs and straight parts
□ straight: curve with constant infinite radius (zero curvature)
□ arc: curve with a constant radius that is not infinite,
• gradient: constant gradient of the gradient curve in per mille;
positive values indicate an upward slope (rise), negative values indicate a downward slope (fall) (optional; xs:decimal),
• deltaGradient: change of gradient of the gradient curve in per mille;
use this attribute if the gradient value of the gradient curve is not constant, but changing; the delta gradient shall be calculated as difference of gradient at the end and gradient at the beginning
of the gradient curve (optional; xs:decimal),
• radius: radius of the gradient curve in metres;
use negative values to describe the arc curve of a valley and use positive values to describe the arc curve of a hill (optional; xs:decimal),
• length: length of the gradient curve in metres;
use this attribute in particular to define the arc length (optional; xs:decimal),
• id: the identifier of the object; this can be either of type xs:ID or UUID (obligatory; xs:ID; patterns: (urn:uuid:)?[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}|
{[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}}); compare: Dev:Identities
Elements may have different parent elements. As a consequence they may be used in different contexts.
Please consider this as a user of this wiki as well as when developing this documentation further.
Aspects that are only relevant with respect to one of several parents should be explained exclusively in the documentation of the respective parent element.
Changes 3.1→3.2
There exists an overview of all changes between railML^® 3.1 and railML^® 3.2 on page Dev:Changes/3.2.
The element documentation has been changed.
The children have been changed.
The attributes have been changed.
Changes 3.2→3.3
There exists an overview of all changes between railML^® 3.2 and railML^® 3.3 on page Dev:Changes/3.3.
The element documentation has been changed.
The children have been changed.
The attributes have been changed.
│ Proposed Semantic Constraint "IS:016": │
│ @gradient should not be used if @curveType is "mixed". Instead of @gradient, @deltaGradient should be used. │
│ │
│ Compare #xxx │
│ │
│ │
│ Proposed on February 26^th 2021 │
│ Discuss this semantic constraint │
│ Please, recognize our guidelines on semantic constraints │
Best Practice / Examples
Additional Information
Open Issues
|
{"url":"https://wiki3.railml.org/index.php?title=IS:gradientCurve&oldid=9420","timestamp":"2024-11-09T06:31:32Z","content_type":"text/html","content_length":"41756","record_id":"<urn:uuid:736bcbd0-e2fc-45c3-929e-f799e3976dda>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00086.warc.gz"}
|