Multiplication 5 Times Table Worksheet
Math, especially multiplication, forms the foundation of many academic disciplines and real-world applications. Yet, for many learners, mastering multiplication can be a challenge. To address this obstacle, educators and parents have embraced a powerful tool: the Multiplication 5 Times Table Worksheet.
Intro to Multiplication 5 Times Table Worksheet
5 times table worksheets, 6 times table worksheets, 7 times table worksheets, 8 times table worksheets, 9 times table worksheets. You can also use the worksheet generator to create your own multiplication facts worksheets, which you can then print or forward. The tables worksheets are ideal for the 3rd grade.
The 5 times table is usually the 4th times table students will learn, following the tables of 1, 2 and 10. The 5 times table can be easily remembered by adding 5 every time. These free 5 multiplication facts table worksheets, for printing or downloading in PDF format, are specially aimed at primary school students. You can also make a
Value of Multiplication Practice
Understanding multiplication is essential, laying a solid foundation for advanced mathematical concepts. Multiplication 5 Times Table Worksheets offer structured and targeted practice, promoting a deeper comprehension of this essential math operation.
Evolution of Multiplication 5 Times Table Worksheets
23 INFO MULTIPLICATION 5 TIMES TABLE WORKSHEET HD PDF PRINTABLE DOWNLOAD MultiplicationTable
5 times table worksheets PDF: multiplying by 5 activities. Download our free 5 times table worksheets and easily develop math skills with the PDF versions. To have an exciting time with multiplying-by-5 activities, simply stick to a particular skill that will provide correct answers in all your x5 multiplication sentences.
8 Free 5 Times Tables Worksheets (35 Free Pages), May 26, 2022. These free 5 times table worksheets help students visualize and understand multiplication and number systems. 1st grade students will learn basic multiplication methods and can improve their basic math skills with our free printable 5 times table worksheets.
From traditional pen-and-paper exercises to digitized interactive formats, Multiplication 5 Times Table Worksheets have evolved, catering to diverse learning styles and preferences.
Types of Multiplication 5 Times Table Worksheets
Basic Multiplication Sheets
Simple exercises focusing on multiplication tables, helping learners build a strong arithmetic base.
Word Problem Worksheets
Real-life situations incorporated into problems, improving critical thinking and application skills.
Timed Multiplication Drills
Tests designed to improve speed and accuracy, aiding rapid mental math.
Advantages of Using Multiplication 5 Times Table Worksheet
5 Multiplication Table Worksheet 5 Times Table Worksheets
Use this 5 times table worksheet to help your child become familiar with and recall their 5 times table. Includes a number grid for quick reference. See also: Mixed 2, 5 and 10 Times Table Multiplication Wheels Worksheet Pack; 7 Times Table Worksheet; Twos, Fives and Tens Picture Arrays.
Print 5 times table worksheet: click on the worksheet to view it in a larger format. For the 5 times table worksheet you can choose between three different sorts of exercise. In the first exercise you have to draw a line from the sum to the correct answer. In the second exercise you have to enter the missing number to complete the sum correctly.
Enhanced Mathematical Skills
Consistent practice hones multiplication proficiency, strengthening overall math skills.
Improved Problem-Solving Abilities
Word problems in worksheets develop logical thinking and strategy application.
Self-Paced Learning Advantages
Worksheets accommodate individual learning paces, fostering a comfortable and flexible learning environment.
How to Create Engaging Multiplication 5 Times Table Worksheets
Incorporating Visuals and Colors
Vibrant visuals and colors capture attention, making worksheets visually appealing and engaging.
Including Real-Life Scenarios
Relating multiplication to everyday situations adds relevance and practicality to exercises.
Tailoring Worksheets to Different Skill Levels
Customizing worksheets for varying proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games
Technology-based resources provide interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Apps
Online platforms offer varied and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Different Learning Styles
Visual Learners
Visual aids and diagrams support comprehension for students inclined toward visual learning.
Auditory Learners
Verbal multiplication problems or mnemonics accommodate learners who grasp concepts through auditory means.
Kinesthetic Learners
Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Use in Learning
Consistency in Practice
Regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety
A mix of repetitive exercises and varied problem formats maintains interest and understanding.
Providing Constructive Feedback
Feedback helps identify areas for improvement, encouraging continued progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Obstacles
Monotonous drills can cause disinterest; innovative approaches can reignite motivation.
Overcoming Fear of Math
Negative perceptions of mathematics can hinder progress; creating a positive learning environment is crucial.
Impact of Multiplication 5 Times Table Worksheets on Academic Performance
Studies and Research Findings
Research shows a positive correlation between consistent worksheet use and improved math performance.
Multiplication 5 Times Table Worksheets are versatile tools, promoting mathematical proficiency in students while accommodating varied learning styles. From basic drills to interactive online resources, these worksheets not only strengthen multiplication skills but also promote critical thinking and problem-solving abilities.
Multiply And Match Multiplication Activity Multiply By 2 3 4 5 6 7 8 And 9 FREE
Printable Multiplication Worksheets For Grade 5 Free Printable
Check more of Multiplication 5 Times Table Worksheet below
Free 5 Times Table Worksheets Activity Shelter
5 Times Table
Multiplication 5 Times Table Worksheets 101 Activity
5 times table Worksheets PDF Multiplying By 5 Activities
multiplication Printable Worksheets 5 times table Test 1
Multiplication Facts Worksheets Guruparents
Free 5 times table worksheets at Timestables Multiplication Tables
5 Fabulous Free 5 Times Table Worksheets The Simple Homeschooler
Check Out Your Times 5 Multiplication Worksheets This first 5 times tables worksheet is an outside the box way to practice math Have the student complete the sheet with a pencil first Check it for
correct answers and incorrect responses before moving on to coloring I would suggest using erasable colored pencils for a stress free coloring
Multiplication Times Tables Worksheets 2 3 4 5 Times Tables Four Worksheets FREE
Worksheet On Multiplication Table Of 5 Word Problems On 5 Times Table
Multiplication Tables Check MTC Worksheets
FAQs (Frequently Asked Questions).
Are Multiplication 5 Times Table Worksheets suitable for all age groups?
Yes, worksheets can be tailored to different ages and ability levels, making them versatile for different learners.
How often should students practice using Multiplication 5 Times Table Worksheets?
Consistent practice is crucial. Regular sessions, ideally a few times a week, can yield considerable improvement.
Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with varied learning methods for comprehensive skill development.
Are there online platforms offering free Multiplication 5 Times Table Worksheets?
Yes, many educational websites offer free access to a variety of Multiplication 5 Times Table Worksheets.
How can parents support their children's multiplication practice at home?
Encouraging consistent practice, providing assistance, and creating a positive learning environment are all helpful steps.
{"url":"https://crown-darts.com/en/multiplication-5-times-table-worksheet.html","timestamp":"2024-11-12T07:06:03Z","content_type":"text/html","content_length":"28524","record_id":"<urn:uuid:d04bac7f-9864-461d-be85-5a28d449825c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00712.warc.gz"}
Composite sampling I: the Fundamental Sampling Principle
Kim H. Esbensen^a and Claas Wagner^b
^aGeological Survey of Denmark and Greenland (GEUS) and Aalborg University, Denmark. E-mail: [email protected]
^bSampling Consultant. E-mail: [email protected]
From the first five columns, the avid reader will have acquired a good understanding of the difficulties of sampling, especially how heterogeneity interacts with the sampling process, producing all
manner of detrimental effects, due to a series of identifiable sampling errors. We have presented a plethora of examples of heterogeneous lots and their varied manifestations—and stressed the
resulting difficulties. Now is the time to start addressing the very reasonable question: what can be done about all this heterogeneity? Luckily there are many actions available, all stemming from
the Theory of Sampling (TOS). Here we introduce the powerful concept of composite sampling in close relation with the Fundamental Sampling Principle (FSP). These are in fact the only two options
available at the primary sampling stage, i.e. when facing the original sampling target and are therefore of paramount importance for all sampling, at all scales, for all materials…
WHAT TO DO with all this heterogeneity?
Trying to sample a(ny) heterogeneous lot with a single sampling operation, generically termed grab sampling, is completely out of the question, for the simple reason that such a single sample (
“specimen” rather) will never be able to represent a heterogeneous material (lot) in splendid isolation by itself, except in the rarest of accidental situations (and one would never be able to know
when this was the case). This is regardless of whether the heterogeneity is visible or not. This latter point is worth emphasising because of the frequent situation of apparently visible uniformity,
see Figure 1.
But there is hope; indeed, a solution is immediately available. While in Figure 2 each individual grab sample (white circles) will fail for this reason, an ensemble of such increments stands a much better chance: a composite sample is defined as an ensemble of individual increments, carefully spread out so as to cover the full geometry of the lot, with the express intention of being accumulated into one aggregate sample, a composite sample.
The notion of a composite sample, subject to a few, logical demands, will be shown to be the saviour of all sampling problems and issues that otherwise would have been left unsolvable. Composite
sampling constitutes one of 10 sampling unit operations (SUO) with which to address all sampling problems (see later columns).
For composite samples, the number of increments (Q) is of course important, but only if deployed in a problem-dependent fashion, indeed only if covering the lot geometry adequately. Also, rather paradoxically if you think of traditional statistics, it is not only about the number of samples/increments, but just as much about how these Q increments are deployed within the whole lot volume. For the situation depicted in Figure 2, a reasonable sampling coverage is beginning to see the light as Q increases… This is a situation easily depicted for a 2-D lot.
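The effect of Q can be illustrated with a small simulation (a hypothetical sketch, not from the column; the lot model, seed and all numbers are invented for illustration): a heterogeneous 1-D lot is sampled either by a single grab sample or by a composite of Q increments spread evenly over the whole lot, and the spread of the resulting estimates around the true lot mean is compared.

```python
import random
import statistics

random.seed(42)

# Hypothetical 1-D "lot": the analyte concentration trends across 1000
# positions, i.e. the lot is heterogeneous (values differ by location).
lot = [10 + 5 * (i / 1000) + random.gauss(0, 1) for i in range(1000)]
true_mean = statistics.mean(lot)

def composite_estimate(q):
    """Average of q increments spread evenly over the lot (one per stratum)."""
    step = len(lot) // q
    return statistics.mean(lot[random.randrange(step) + k * step] for k in range(q))

def spread(q, trials=500):
    """Standard deviation of the sampling error over repeated samplings."""
    return statistics.stdev(composite_estimate(q) - true_mean for _ in range(trials))

grab = spread(1)        # single grab sample
composite = spread(10)  # composite of Q = 10 increments
print("grab-sample error spread:     ", round(grab, 3))
print("10-increment composite spread:", round(composite, 3))
assert composite < grab
```

Covering the lot with evenly deployed increments shrinks both the random spread and the exposure to the lot's spatial trend, which is the point made above: it is not only Q that matters, but how the Q increments cover the lot volume.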
However, the situation becomes significantly more complex regarding stationary 3-D lots (e.g. piles, silos, vessels etc.). With respect to Figure 3, it is obviously not a solution to deploy Q
increments only within a local, narrow footprint—this is getting exactly nowhere near even trying to “cover” the entire lot, see Figure 3 (left panel). What is needed is a broadening out of the
sampling plan, but not only along the lot surface—it is imperative that the coverage is also able to sample the interior of the lot (pile in the case depicted).
Enter the Fundamental Sampling Principle (FSP): all virtual increments in any lot must be susceptible to sampling and must have the same, non-zero probability of ending up in the final sample… All potential increments that might be identified in a composite sampling plan must be amenable to practical, uncompromised extraction, no exception.
This means that even the sampling taking place in the right panel in Figure 3, cannot be said to uphold the FSP!
Composite sampling is good, but in itself not a panacea; it must comply with the demands of the FSP as well. Composite sampling is a necessary condition, but it only becomes sufficient when also
obeying the FSP. FSP constitutes another of the 10 SUOs. These two governing principles apply to lots of all dimensionalities, 0-D,^† 1-D, 2-D as well as 3-D lots.
Figures 4–7 show a remarkable difference in appearance, yet they all illustrate the necessary compliance between composite sampling and the FSP.
1-D lots are not really 1-D lots like a geometric line, but lots in which one of the dimensions completely dominates the other two, Figure 6. While being 3-D lots in principle, because of TOS' demand that any increment extracted from such a lot must cover the two other dimensions completely, this lot becomes a true 1-D lot in practice: it is only the heterogeneity in the dominating elongated dimension that matters, since all "transverse heterogeneity" has been successfully represented in each increment.
Figure 6 shows a powder manifestation of a 1-D lot (the lot material is power plant incineration ash which needs characterisation and hence primary samples are collected from the incinerator), but in
fact Figure 6 illustrates the procedure used in the laboratory for sub-sampling the primary samples. Here compliance with the FSP is secured through application of the operation of “bed-blending”.
All of the primary sample material is laid out in the sampling rack—in this particular case in six layers, but preferentially as many as possible, after which the sampling procedure enjoys complete
access to “everywhere” in the lot.
In this example, 7 transverse increments were extracted, but this in reality corresponds to no less than 42 increments in total, since the material was laid out in 6 beds originally, i.e. 6 beds × 7
increments. This compound composite sampling approach is called: “bed-blending stacking/thin-slice reclaiming”. It can be very effective when it is acknowledged that the total number of increments
scales with the number of beds laid down (B) times the number of full transverse cuts extracted (Q). It is worth noting that each thin-slice increment is in effect a small B-composite sample in its
own right, made up of B constituting layer-increments. These B × Q increments are demonstrably covering the entire geometric volume of the precursor lot irrespective of its form, geometry and
mass—because one has made the effort of stringing the complete lot material out in a 1-D linear manifestation (albeit folded), making compliance with FSP both easy and very effective. The combined
operation is a kind of lot dimensionality transformation (LDT), from 3-D to 1-D. LDT is another of the 10 SUOs, more of which later.
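The B × Q bookkeeping of bed-blending/thin-slice reclaiming can be sketched structurally (a hypothetical illustration; the bed length and index scheme are invented, not from the column):

```python
# B beds laid down, Q full transverse cuts extracted, as in the text's example.
B, Q = 6, 7
bed_length = 70  # positions along the sampling rack (arbitrary)

# The lot material is strung out into B stacked beds; each position is
# identified by (bed index, position along the rack).
lot = [[(b, x) for x in range(bed_length)] for b in range(B)]

# Q evenly spaced transverse cut positions along the rack.
cut_positions = [i * bed_length // Q for i in range(Q)]

# Each thin-slice cut crosses all B beds, so it collects B layer-increments.
increments = [lot[b][x] for x in cut_positions for b in range(B)]

assert len(increments) == B * Q == 42                 # the compound count from the text
assert {b for b, _ in increments} == set(range(B))    # every bed is represented
```

Each cut is itself a small B-increment composite, and the B × Q increments jointly span the whole strung-out lot, which is what makes compliance with the FSP straightforward here.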
Note that this technique can be applied to any scale and is in fact often used for primary lot sampling and blending/mixing purposes of bulk material occurring in significant tonnages.
By extracting several increments at regular intervals along the elongated dimension (or at random positions), a particularly effective sampling is achieved by aggregating all increments. By this
approach the entire lot volume (the entire primary sample volume) is guaranteed to be available for sampling and this composite sampling process therefore complies entirely with the FSP. This is of
interest also for coarser fragment aggregates, which traditionally are considered as “difficult” to sample.
Figure 7 shows such a case also subjected to “bed-blending/thin-slice reclaiming” in an impromptu implementation. Note that this technique is not necessarily associated with a particular type, or
brand, of equipment—on the contrary: until this type of laboratory sampling was demonstrated (for a world-class company with a strong laboratory tradition) simple spatula-based grab sampling had been
ruling for years/decades… “because there is no other equipment available” (sic.).
Which materials, which company, which laboratory… is of absolutely no interest—the only thing that matters is that a simple, essentially no-cost solution [a piece of cardboard (folded) and a
high-walled spatula] was able to transform the world’s worst sampling procedure (grab sampling) to an unsurpassed, representative procedure (bed-blending/thin-slice reclaiming) because of
understanding and respect for TOS in general, and for the FSP in particular. Figure 7 is a role model sampling procedure improvement example.
There are many other variations on the theme of composite sampling + FSP in the world of science, technology and industry, but the present introduction should allow easy recognition. The next column
will show more examples of the versatility and effectiveness of composite sampling especially for sampling dynamic lots, i.e. moving streams of matter (process sampling).
{"url":"https://www.spectroscopyeurope.com/sampling/composite-sampling-i-fundamental-sampling-principle?utm_id=2141&utm_medium=image&utm_source=popular-today","timestamp":"2024-11-08T05:11:54Z","content_type":"text/html","content_length":"58434","record_id":"<urn:uuid:d8bdd6fb-75fb-4200-86c2-2a15140dba51>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00485.warc.gz"}
Geo, I do recognize that you are reviewing the big picture differently from 'standard' physicists…
Geo, I do recognize that you are reviewing the big picture differently from 'standard' physicists, and I am definitively interested in your perspectives and information.
Because I start from a mechanical/structural reality, I see where our words do not always match up, and that is still the confusing part for me. You are using words I do not work with. One choice is
therefore to communicate until we recognize the words the other person is using, and adjust our writing to accommodate the other.
First off, everything depends on the model first. If we start out from energy, then we end up with a different model than when we start up with matter.
Starting with energy, we end up with two forms of matter that neither are self-based, the quarks and the electrons. The quarks forming the neutrons and protons end up providing us a self-based,
linear, secondary reality, much like Zeus conquering the realm of his father, capturing yet not killing Cronus. The electrons on the other hand are distinct; they never become anything other but
(non-linear) energy, with as exception that they are pulled into the linear environment. From the perspective of Energy, the damaged energy took over the leading role, while it is with Energy that we
need to form the big-picture level (and accept that part of it is in the non-visible realm).
Starting with matter instead, we end up with scientific concepts that are real but not the essence, and yet taken as the foundation. We can see smalltime Mathematical 1s in planets and stars, even in
solar systems and galaxies. Yet there is no Mathematical 1 for all matter in the universe.
I hope you see how starting with energy or starting with matter changes the perspective about the whole. We can end up never seeing the actual big-picture level because we focus on the visible parts
and then declare that they inform us fully about the big picture.
I do understand better now why you like time dilation. You have a function for it. You need that function because particles need to be limited in their speeds. I find that interesting because I don't
need time dilation; I just need drag of some kind and then we are done.
So, I am glad you liked the silver slivers. But you are still trying to link them with time dilation. I believe you are trying to explain two systems that are happening at the same time as if they were one system. What would happen if you just threw away the idea of time dilation? I am convinced the entire image remains intact. In short (but correct me if I am wrong), it appears that you are doing too much and that the story is therefore simpler.
For me the discussion about c is not that interesting. Who cares if there is something faster than the speed of light? I see no importance in knowing that; it does not inform the big picture.
Did Einstein consider it a special condition where he should not have done that? That appears to be the case. But my assessment is that he tried to explain Spacetime to others and then ran himself
into the ground because he could not explain how the others should construct the correct big picture in their minds. I think he gave up and let it hang. In other words, Spacetime was a tool for him
and he could not explain this to others without getting himself sucked into the quicksand of other people's minds.
Let me mention Gödel here, because there was something weird going on. He gave us his two Incompleteness Theorems, which I declare here as showing there is no Mathematical 1, and yet he subsequently
delivered an idea about the universe as if the universe were a Mathematical 1 for real.
He suggested that the universe as a whole could spin.
This is a total undermining of his two Incompleteness Theorems (in light of their denying the existence of a Mathematical 1). He presented an idea that turned the universe into a unit of some kind.
It does not matter that the idea was a brain fart; he envisioned it in his mind, which tells me that his mind was not made up according to the two incompleteness theorems he gave us.
My conclusion is that he stumbled into his two Incompleteness Theorems for other reasons (like curiosity about systems) and did not draw the ultimate conclusion about the universe from it.
Back to Einstein: I can see how both of them formed a rather accurate big picture, except for the big picture itself. They did not place the big picture on its own level. They continued to unify everything they had in their minds, and did not recognize that their minds themselves had to be aligned in that same manner as well.
Do you see it? You probably do, but allow me to dwell on this a bit.
The brain is a tool. And if we do not recognize the Santa spot inside our brain, then we will conjure a reality in which Santa lives among us all year round.
The point I am trying to make (and claim) is that Gödel and Einstein did not see where their Santa occurred inside their brains.
Gödel figured out the two Incompleteness Theorems, but his Santa prevented him from applying it to the universe as a whole. Gödel did not have his own brain in order, not-using the two Incompleteness
Theorems as guidelines.
Einstein figured out Spacetime, but his Santa prevented him from seeing how this did not mean that light as the fastest speed is the god-moment we all were waiting for. Einstein did not have his own
brain in order, not-using his Spacetime as guideline.
They did not apply their own findings to their own brains.
Yes, it is always a strange area to discuss one's brain (especially someone else's brain) and we may think we all think alike (i.e. we don't discuss it).
Geo, I hope you understand that I am still working on the big-picture model here with you in this reply. Of interest is first how we made up our own minds. It informs us so much about what we view.
Based on this, some more remarks.
When you write that "a valid idea must be true in all cases and all scales," then we have a fundamental problem. The truth is the truth only in its declared context. Blue is something specific in the
paint store, and blue is something else -specific as well- on the couch with the shrink. Never ever can we say that we captured a truth, truthful in all cases and all scales, because we have to align
the contents with the context.
The Mathematical 1 is not available, so we can't claim that a valid idea must be true in all cases and all scales because that demands the existence of a Mathematical 1.
There is no overall truth. As above, so below is a falsehood, a brain fart. We are trained in life to go for the unifying outcome as our only overarching truth (be it currency, language, religion), but these do not exist at the actual overall level.
We will not agree on the big picture unless we agree that there are specific realms that collectively cannot be made the same as a Mathematical 1. Then, we can discuss the specific realms and their
Let me introduce the pyramid model here, and place the four specific forces as the four corner stones: weak nuclear force, strong nuclear force, electric force, and the magnetic force. Since they
were unified in the GUT, there should not be an issue to see the four expressed parts each by themselves in the larger setting.
The one force missing is the gravitational force. But it is right there, existing in the Mathematical 2 level.
Exchange the specific forces with red, blue, yellow and green as the four corner stones of the pyramid, and next follow them into the pyramid where these colors blend. We end up with a gray mixture
inside the pyramid. If we review the gray area all by itself, we do not have a pyramid, but rather we have a cone.
The secret object inside the pyramid is the cone.
And the cone is the tool for gravity. Cut slices in a cone, and you can see the motions of celestial bodies: circle, ellipse, parabola, hyperbola.
What this tells us is that gravity is the secondary outcome, the synergistic effect of the other forces. Built from but nevertheless truly distinct from the other forces.
Gray is a distinct color (some would not call it a color) that came about after the specific colors got mixed.
Once we have gray, then we have to recognize two systems: One is the color system that also contains gray; the other is the Black&White system that also contains gray. Neither system is the only
system there is. They are both distinct systems. But they do have gray in common.
If we swap gray for gravity, then we have two systems in which to consider gravity; one in which gravity is the synergistic outcome of the specific forces, and the other in which gravity is the
generic middle ground between motion and standstill.
Quick jump: Because matter took the leading role, we have an inverse in the big picture called Energy.
When we look at the materialization process, then the ensuing speed of all matter moving outwardly is the fastest speed (using the standard model in this case). It is at this level of speed that we
find additional speeds of a different character.
The issue with these additional speeds is that they do not add to the general fastest speed nor do they subtract. Matter is found in two systems; they are simply in their own reality of circling just
as well.
If we view the solar system 4.6 billion years ago, then matter would not have come together due to gravity, but due to the circling motions that the proto-solar system was involved in. The largest
dead zone found in the center helped form the sun because this is where most matter ended up collecting itself; the various smaller circles experiencing a material build-up in their own dead zones
helped form the planets. Pretty sure (in my mind that is) that all the debris in between is from fully formed proto- and mini-planets that ended up smashing into each other along the way (plus
whatever foreign matter ended up entering the solar system).
The larger point being that the single direction, the fastest speed we are involved in, is still the paramount 'force' controlling the entire setting. Yet, once we move toward the specific celestial
bodies, they are the main focus, their own formation based not on the fastest speed but on the internal speed differentiations.
Do you see the two levels I am talking about? The fastest speed is not related to the specific speeds of sun&planets. They happen at the same time, not distinctly related.
The cylinder with water and the silver slivers:
The cylinder is the same as the single direction of the fastest speed. It is not a something; it is an action, and this action informs the somethings we call matter found inside the setting.
The water is the energized setting itself floating outwardly at that fastest speed, but also circling itself. It has (at minimum) two actions.
The silver slivers are Matter and this is where the water and the silver slivers have a communal reality because there is some water inside the silver slivers as well.
The reply is too long. I wish we could meet up in person and talk, but this is second best.
Thank you for your replies.
{"url":"https://fred-rick.medium.com/geo-i-do-recognize-that-you-are-reviewing-the-big-picture-differently-from-standard-physicists-f30d0765677c?source=post_page-----4689c7399b64--------------------------------","timestamp":"2024-11-11T12:46:46Z","content_type":"text/html","content_length":"127436","record_id":"<urn:uuid:9a1afa7c-e4e7-409c-b22c-57e4747a4d93>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00013.warc.gz"}
Matlab Sine Wave | A Quick Glance of Matlab Sine Wave with Examples (2024)
Introduction to Matlab Sine Wave
Sine wave, also known as a sinusoidal wave, is a mathematical expression that represents a repetitive oscillation. It is a function of time. MATLAB supports generating sine waves using the 2D plot function. In this topic, we are going to learn about the Matlab Sine Wave.
The general syntax for a sinusoidal input can be given as:
s(t) = A sin(Bt + C)
where 'A' decides the peak value of the wave, i.e. the amplitude.
The below code is written to generate a sine wave with an amplitude of '1' and display it using the plot() function:
t = 0:pi/100:2*pi;
st = sin(t);
plot(t, st);
In real-time applications, the sine wave inputs are formed as
s(t) = A sin(wt + theta)
where
w decides the angular frequency of the wave,
theta decides the phase angle of the wave, and
f decides the linear frequency of the wave.
Note: the angular frequency w and the linear frequency f are related as w = 2*pi*f.
Examples of Matlab Sine Wave
Here are the following examples mention below:
Example #1
The below code is developed to generate and plot a sine wave having values for amplitude as '4' and angular frequency as '5'.
t = 0:0.01:2;
w = 5;
a = 4;
st = a*sin(w*t);
plot(t, st);
The resultant sine wave is displayed for the time duration of 0 to 2 attaining the peak amplitude +4 in the first half cycle and -4 in the second half cycle with angular frequency 5.
Example #2
The below code is developed to generate and plot a sine wave having values for amplitude as '1' and linear frequency as '10'.
f = 10;
t = 0:0.001:1;
st = sin(2*pi*f*t);
plot(t, st);
The resultant sine wave is displayed for the time duration of 0 to 1, attaining the peak amplitudes +1 and -1 in each cycle, with linear frequency 10. Note that the sampling step must be much smaller than the period 1/f for the oscillation to be resolved, which is why a step of 0.001 is used here.
MATLAB incorporates the flexibility of customizing the sine wave graph. This can be achieved by editing the attributes of the plot() function:
1. xlabel: generates the x-axis label.
2. ylabel: generates the y-axis label.
3. title: adds a title to the sine wave plot.
4. axis square: generates the sine wave plot in a square axes box.
5. axis equal: creates the sine wave plot with a common scale factor and spacing for both axes.
6. grid on: enables gridlines for the sine wave plot.
The below code is developed to generate a sine wave having values for amplitude as '1' and linear frequency as '10', with a labelled and gridded plot.
f = 10;
t = 0:0.001:1;
st = sin(2*pi*f*t);
plot(t, st), xlabel('time-axis'), ylabel('st'), title('Sine wave'),
grid on
The resultant sine wave is displayed for the time duration of 0 to 1, attaining the peak amplitudes +1 and -1 in each cycle, with linear frequency 10. The plot is customized by inserting values for xlabel, ylabel and the title of the plot.
Generating multiple sine wave plots with different pair of axes
The feature of generating the multiple numbers of sinusoidal plots with different pairs of axes in the same layout using a single command can be applied using ‘subplot’.
The below code is developed to generate 2 sine waves having values for amplitude as 5 and 10 respectively and angular frequency as 3 and 5 respectively.
t = 0:0.01:5;
st = 5*sin(3*t);
subplot(2,1,1), plot(t, st), xlabel('t'), ylabel('subplot 1')
st = 10*sin(5*t);
subplot(2,1,2), plot(t, st), xlabel('t'), ylabel('subplot 2')
The resultant graph contains two different sinusoidal plots created in the same layout. The first sinusoidal wave is generated in the first cell of the layout with an amplitude of 5 and an angular
frequency of 3. The second sinusoidal wave with an amplitude of 10 and an angular frequency of 5 is generated in the second cell of the layout.
1. Presenting multiple sine waves with a common axis
Displaying multiple sine waves sharing one set of the common axis is also supported by MATLAB. This can be achieved in a single command as shown in the example given below.
The code snippet is written to display 2 different sine waves on one common plane sharing a single pair of axes.
t = [0 :pi/10: 10];
st1 = 5*sin(t);
st2 = 10*sin(2*t+3);
plot(t,st1,t,st2,'.-'), legend('signal1', 'signal2')
The resultant graph contains one sinusoidal wave having amplitude 5 and angular frequency 1 and another sine wave having an amplitude of 10, angular frequency of 2, and a phase shift of 3.
2. Creating an area plot for sinewave function
The MATLAB 2d plot method area() can be used to represent the input sine wave with the area under the curve filled. The curve at the nth interval of the time axis represents the relative share of each input element with respect to the total height of the curve.
The below code generates an area type plot for the given sinusoidal wave input.
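A minimal sketch of what that code might look like (reusing the earlier 10 Hz signal; the variable names are illustrative, not from the original article):

```matlab
% Area plot: the region between the sine curve and the t-axis is filled.
f = 10;                    % linear frequency
t = 0:0.001:1;             % time vector, fine enough to resolve the wave
st = sin(2*pi*f*t);        % sine wave samples
area(t, st)                % 2D area plot of the wave
xlabel('t'), ylabel('st'), title('Area plot of a sine wave')
```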
3. Creating a scatter plot for sinewave function
The MATLAB 2d plot method scatter() can be used to represent the input sine wave data points as small circles concerning the value on the ‘t’ axis.
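A hedged sketch of such a scatter plot, under the same assumed signal:

```matlab
% Scatter plot: each (t, st) sample is drawn as a small circle.
t = 0:pi/50:2*pi;
st = sin(t);
scatter(t, st)
xlabel('t'), ylabel('st'), title('Scatter plot of a sine wave')
```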
4. Creating a stair plot for sinewave function
The MATLAB 2d plot method stairs() can be used to represent the input sine wave in the form of stairs drawn over the ‘t’ axis.
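A hedged sketch of such a stair plot, under the same assumed signal:

```matlab
% Stair plot: the wave is rendered as a piecewise-constant staircase over t.
t = 0:pi/50:2*pi;
st = sin(t);
stairs(t, st)
xlabel('t'), ylabel('st'), title('Stair plot of a sine wave')
```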
5. Generating sine wave for real-time application simulation
MATLAB is used to run simulation activities of real-time applications. Most of the signals from the applications are sinusoidal by nature. Hence generating a sine wave using MATLAB plays an important
role in the simulation feature of MATLAB.
The electrical voltage across and the current through a resistor are related as
Current i = voltage (v) / Resistance (r)
The below example demonstrates the extraction of the current value from the voltage input and plots both signals.
t = 0:pi/100:2*pi;
r = 10;
v = 20*sin(5*t);
i = v./r;
plot(t, v, t, i)
The generation of sine wave signals using plot function is one of the key features in MATLAB which enables it to run a simulation process for many real-time functions accurately and precisely. The
flexibility in customization of the display of sine waves is a major add-on to its applicability.
Recommended Articles
This is a guide to Matlab Sine Wave. Here we discuss the generating multiple sine wave plots with different pairs of axes along with the sample examples. You may also have a look at the following
articles to learn more –
1. Matlab Plots Marker
2. What is Simulink in Matlab
3. Matlab stem()
4. Matlab Line Style
Tangent: Introduction to the Tangent Function in Mathematica (subsection TanInMathematica/01)
The following shows how the tangent function is realized in Mathematica, with examples of Mathematica functions evaluated on various numeric and exact expressions that involve or return the tangent function. These involve numeric and symbolic calculations and plots.
Number Problems | Stage 3 Maths | HK Secondary S1-S3
Number Sentences
I own 2 adult dogs, and they had a litter of 5 puppies. How many dogs do I have in total?
We know we could write this using maths:
2 (dogs that I started with) + 5 (new cute puppies)
And we could solve this by completing the operation:
2 + 5 = 7
So I have seven dogs in total.
This is an example of a number sentence. We used the numbers in the worded statement to form a mathematical statement.
We can also write expressions and equations from worded statements even if we don't have all the numbers we need.
Writing Equations
I had some pocket money saved. Then my mother gave me $25 for cleaning the car, and now I have a total of $62.
Let's call the starting amount p. (In fact, if you are not told what to call it, you can call it anything you like!) The worded statement then becomes the equation p + 25 = 62.
The key to writing equations like this is to be able to identify key words in the sentences.
Is it addition or subtraction?
more, sum, add, join, altogether, in total, both, combined, increase are all words that would indicate the operation of addition.
left, subtract, minus, remain, decrease, use, less than, difference, take away, fewer, shorter are all words that would indicate the operation of subtraction.
Is it multiplication or division?
product of, multiplied by, times, of, double, triple, groups are all words that would indicate the operation of multiplication.
quotient of, divided by, per, into, out of, ratio of, unit price, cut up, separated, share equally, split, half, parts are all words that would indicate the operation of division.
is, are, was, were, will be, gives, yields, sold for are all words that would indicate an equals sign.
Mathematising (the idea of turning words into maths) is a bit like translating into another language! But it does get easier with practice.
Let's have a look at some.
Question 1
First, we need to identify the key words in the statement:
And then we write the statement using mathematical symbols:
Here are a few more to watch before you attempt the exercise.
Question 2
The sum of x and 10 is 25. Construct an equation and solve for the value of x.
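A worked solution, translating the key words ("sum" gives addition, "is" gives the equals sign):

```latex
x + 10 = 25 \quad\Rightarrow\quad x = 25 - 10 = 15
```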
Question 3
The product of 5 with the difference of 5 from x equals -15. Construct an equation and solve for the value of x.
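"Product of" signals multiplication, "difference of 5 from x" is x - 5, and "equals" gives the equals sign, so:

```latex
5(x - 5) = -15 \quad\Rightarrow\quad x - 5 = -3 \quad\Rightarrow\quad x = 2
```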
Is black hole entropy invariant?
• Thread starter yuiop
• Start date
Hawking gave the black hole entropy equation:
(1) [tex]S_{BH} = {k c^3 \over 4G \hbar} A[/tex]
where A is the surface area of the black hole event horizon. All the other factors on the right hand side of the equation are constants.
From the point of view of an observer moving relative to the black hole, the spherical event horizon appears to be an oblate spheroid with its shortest radius parallel to the relative motion. The surface area of an oblate spheroid varies in a non-linear way with respect to the contraction of one of the radii, and it is certainly not invariant under a Lorentz transformation. This contradicts almost every text on relativistic thermodynamics, which almost universally accept that entropy is a Lorentz invariant.
Perhaps the situation can be rectified by substituting the equation for the volume (V) of a sphere into the equation?
The equation would then be:
(2) [tex]S_{BH} = {k c^3 \over 4G \hbar} \times {3 \over R} \times V = {k c^3 \over 4G \hbar} \times {3 \over R} \times {4 \pi R_x R_y R_z \over 3}[/tex]
By assuming that the relative motion is along the x-axis and by assuming the undefined R is the radius parallel to the relative motion the equation becomes:
(3) [tex]S_{BH} = {\pi k c^3 \over G \hbar} \times { R_y R_z } = {k A_{transverse} \over L_p^2}[/tex]
where A_transverse = pi*R_y*R_z is the transverse cross-sectional area of the event horizon and Lp is the Planck length (so Lp² is the Planck area). This formulation is invariant under a Lorentz transformation, as all length measurements are transverse.
Since the radii along the y and z axes remain unaltered under transformation, we can substitute the value of the Schwarzschild radius:
(4) [tex]S_{BH} = {\pi k c^3 \over G \hbar} \times { 4 G^2 M^2 \over c^4 }[/tex]
which simplifies to
(5) [tex]S_{BH} = {4 G \pi k \over \hbar c} \times {M}^2 = 4 \pi k {M^2 \over M_p^2}[/tex]
where M is the mass of the black hole and Mp is the Planck mass.
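As a sanity check of equation (5), evaluating it for a one-solar-mass black hole (assuming standard values M ≈ 1.99 × 10³⁰ kg, Mp ≈ 2.18 × 10⁻⁸ kg, k ≈ 1.38 × 10⁻²³ J/K, which do not appear in the original post) gives the familiar order of magnitude:

```latex
S_{BH} = 4\pi k \frac{M^2}{M_p^2}
       \approx 4\pi \times 1.38\times10^{-23}
         \left(\frac{1.99\times10^{30}}{2.18\times10^{-8}}\right)^2
       \approx 1.4\times10^{54}\ \mathrm{J/K}
```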
Equation (5) appears not to be invariant although it can be noted that equation (1) can be expressed in similar way using Planck units as:
(6) [tex]S_{BH} = {k \over 4 } {A \over L_p^2} [/tex]
so maybe we can assume invariance is assured by the Planck units Lp and Mp transforming in the same way as area and mass respectively? Was the assumption that the "constants" in equation (1) really are constants misplaced?
kev said:
Hawking gave the black hole entropy equation:
From the point of view of an observer moving relative to the black hole, the spherical event horizon appears to be an oblate spheroid with its shortest radius parallel to the relative motion. The surface area of an oblate spheroid varies in a non-linear way with respect to the contraction of one of the radii, and it is certainly not invariant under a Lorentz transformation. This contradicts almost every text on relativistic thermodynamics, which almost universally accept that entropy is a Lorentz invariant.
This is really interesting kev. But here's the problem: although something like "contraction" should appear in Schwarzschild spacetime, it will be determined by the Schwarzschild metric, not by the
Lorentz transformations.
The Lorentz transformations are rotations of Minkowski (flat) spacetime, and map inertial observers to inertial observers. But observers around a Schwarzschild black hole are not "inertial"; they are
in freefall in a curved spacetime. So contraction won't be determind by the Lorentz transformations.
However, Hawking radiation is just radiation, so it travels along null geodesics. What you might do is this:
Pick the geodesic along which you'd like to make your observation (for example, it might be an observer in a stable circular orbit). Then consider a congruence of geodesics on escape trajectories
away from the black hole, all of which intersect your observer -- these will represent your radiation. I suspect that in many situations, you have correctly predicted that the observer will *see* the
congruence to have contracted as compared to what is *seen* from the perspective of coordinate time.
However, this observer will *judge* the amount of Hawking radiation just like every other observer will judge it, using the formula you have given, since he understands the geometry of spacetime.
Another thought: Schwarzschild spacetime is asymptotically flat. So as you move arbitrarily far away from the black hole, the analysis you do above becomes more and more correct. Unfortunately, if you are really that far away, the change in entropy due to the contraction effect would be too small to measure.
I've always wondered why it is that black hole entropy can be *identified* with classical entropy. They of course follow very similar laws; however, they are defined completely differently. Do we really know that our intuitions about one should carry over to intuitions about the other? -- Perhaps there is good reason to do this, I simply don't know.
kev said:
Hawking gave the black hole entropy equation:
(1) [tex]S_{BH} = {k c^3 \over 4G \hbar} A[/tex]
where A is the surface area of the black hole event horizon. All the other factors on the right hand side of the equation are constants.
From the point of view of an observer moving relative to the black hole, the spherical event horizon appears to be an oblate spheroid with its shortest radius parallel to the relative motion. The surface area of an oblate spheroid varies in a non-linear way with respect to the contraction of one of the radii, and it is certainly not invariant under a Lorentz transformation. This contradicts almost every text on relativistic thermodynamics, which almost universally accept that entropy is a Lorentz invariant.
[tex]c[/tex], [tex]G[/tex], and [tex]\hbar[/tex] are dimensionful constants which are defined to have the same values within any inertial frame. [tex]S_{BH}[/tex] is a dimensionless ratio of two angular momenta which co-vary over any coordinate transformation, so it is invariant.
Was the assumption that the "constants" in equation (1) really are constants, misplaced?
These are dimensionful constants which are defined based on a system of units that is itself defined to be the same within every inertial frame, but the system of units in one inertial frame appears
time-dilated and length-contracted from the perspective of another inertial frame.
Last edited:
FAQ: Is black hole entropy invariant?
1. What is black hole entropy?
Black hole entropy refers to the amount of disorder or randomness within a black hole. It is a measure of the number of microstates that can describe the black hole's current state.
2. How is black hole entropy related to information?
Black hole entropy is directly related to the amount of information that can be contained within a black hole. As the black hole's entropy increases, so does the amount of information it can store.
3. What is the relationship between black hole entropy and the event horizon?
The event horizon of a black hole is the point of no return, where the gravitational pull is so strong that nothing, including light, can escape. The surface area of the event horizon is directly
proportional to the black hole's entropy.
4. Is black hole entropy constant?
No, black hole entropy is not constant. It can increase or decrease depending on the changes in the black hole's mass, spin, and charge.
5. Why is the question of black hole entropy invariance important?
The question of black hole entropy invariance is important because it helps us understand the fundamental laws of thermodynamics and the behavior of black holes. It also has implications for our
understanding of information and the nature of space and time within a black hole.
Institute of Mathematics of the Czech Academy of Sciences | EMS Magazine
🅭🅯 CC BY 4.0
The Institute of Mathematics of the Czech Academy of Sciences, located in the very centre of Prague with a small group of researchers working in Brno, is the leading research institution in
mathematics in the Czech Republic. The mission of the Institute is to foster fundamental research in mathematics and its applications and to provide the necessary infrastructure. In cooperation with
universities, the Institute carries out doctoral study programmes and provides training for young scientists.
The Institute was established in 1947 as the Institute for Mathematics of the Czech Academy of Sciences and Arts (Česká akademie věd a umění). The initiator and the first director was Eduard Čech. In
1953, the Institute was reorganized and incorporated into the newly established Czechoslovak Academy of Sciences. In 1993, when Czechoslovakia split, the Institute became a part of the newly
established Czech Academy of Sciences. In 2007, the Institute, together with 53 other institutes of the Czech Academy of Sciences, was transformed into a public research institution. This status
provides much broader autonomy, especially in research and personnel policy, but still under public control as the vast majority of funds come from public sources. Institutional funding forms about
60% of the resources and is provided by the Czech Academy of Sciences based on regular evaluation of research quality. About 35% of the revenue is earned through competitions for grant projects,
and 5% results from economic activities related mainly to the publication of research journals.
The Institute currently employs around 90 researchers. Almost half of them are foreigners, with more than 20 nationalities. Postdoctoral fellows and PhD students represent more than 25% of the
research staff. All researchers are hired in open competitions for 2–5 year contracts with the possibility of further extension based on a successful personal evaluation. Unfortunately, getting good
PhD students is complicated by the legislation that allows the Institute to be involved in their education only in conjunction with a university. This means that a number of students are coming from
abroad. The infrastructure is supported by 20 staff members who provide the services of the library, IT, project support, editorial office, administration and management.
The research strategy of the Institute is based on bottom-up activities that are supported, encouraged and guided by the management in close cooperation with the Board of the Institute. The
International Advisory Board is asked for advice on important decisions.
The fields of research include those connected with the best tradition of Czech mathematics as well as newly developed areas. The traditional fields are inherently connected with the founding members
and strong personalities of the Institute such as Eduard Čech (Stone–Čech compactification in topology, Čech cohomology, Čech closure operator), Jaroslav Kurzweil (Henstock–Kurzweil integral,
stability theory for ordinary differential equations), Ivo Babuška (theory of finite element method, Ladyzhenskaya–Babuška–Brezzi condition), Jindřich Nečas (regularity of generalized solutions of
elliptic equation, theory of elasto-plastic bodies in continuum mechanics), Miroslav Fiedler (Fiedler algebraic connectivity in graph theory, Fiedler vector in linear algebra and matrix theory), and
Vlastimil Pták (Pták topological vector spaces, Pták subtraction theorem and the notion of the critical exponent of iterative processes).
Following the high standards set by these distinguished personalities, research teams have been cultivating the traditional and strong mathematical disciplines while also opening new research
directions. Current research focuses on mathematical analysis (differential equations, numerical analysis, functional analysis, theory of function spaces), mathematical logic and logical foundations
of computer science, complexity theory, combinatorics, set theory, numerical linear algebra, general and algebraic topology, category theory, optimization and control theory, algebraic and
differential geometry and mathematical physics.
The research at the Institute is organized in five departments that are described in the following paragraphs.
Abstract Analysis
Originally called Department of Topology and Functional Analysis, this department represents a continuation of one of the traditionally strong research directions in the Institute. Under the
leadership of Wieslaw Kubiś, the team recently reassessed their research focus. The emphasis has shifted from the traditional topics of the theory of Banach spaces, operator theory, classical
topology and functional analysis to those areas where mathematical logic plays a significant role, even though it is not the main object of study, namely descriptive set theory, algebraic topology,
category theory, and the theory of C*-algebras. For this reason, the department has been renamed Abstract Analysis.
Several team members are currently involved in the prestigious EXPRO project of excellence funded in 2020–2024 by the Czech Science Foundation and lead by Wieslaw Kubiś. The project aims to explore
and classify generic mathematical objects appearing in the above-mentioned areas of abstract analysis.
Algebra, Geometry and Mathematical Physics
Formed in 2014 on a bottom-up initiative of several members of other teams, this department steadily grows and continuously proves to be one of the most successful within the Institute. They
investigate algebraic and differential geometry and closely related areas of mathematical physics. Their research focuses on mathematical aspects of modern theoretical physics, mathematical models
aiming at understanding the nature of matter, fields, and spacetime. Research topics include representation theory and its applications to algebraic geometry, homological algebra, algebraic topology,
applied category theory, tensor classification, mathematical aspects of string field theory, generalized theory of gravitation, and study of Einstein equations.
The team achieved excellent results in the theory of gravity, analytical solutions of Einstein equations, and modified theories of gravity. Using their conformal-to-Kundt method, Vojtěch Pravda and
Alena Pravdová with their colleagues from the Charles University identified and studied several classes of new static spherically symmetric vacuum solutions of the field equations of modified
gravity, including a new non-Schwarzschild black hole. This discovery attracted widespread attention and was even reported in the media. Martin Markl and his collaborators achieved the ultimate
result on loop homotopy algebras in closed string field theory and constructed the disconnected rational homotopy theory. In 2018, he received the Praemium Academiae award of the Czech Academy of
Sciences, connected with generous funding that allowed him to hire several talented postdocs and establish his own ambitious research group.
Evolution Differential Equations
The research of this department focuses on theoretical analysis of complex multi-field evolution processes in physics, in particular continuum mechanics and thermodynamics. Special attention is paid
to the description of interacting phenomena of different physical natures, such as biological systems, stratified or viscoelastic fluids, contact mechanics between fluids and solids or between rigid,
elastic, or elastoplastic solids, fluid diffusion in deformable porous media, electric and magnetic effects in moving solids and fluids, magnetohydrodynamics, liquid crystals, hysteresis, thermal
effects and radiation, or temperature-induced phase transitions in a large parameter range. The systems under consideration are based on physical laws of conservation of mass, momentum, energy,
balance of entropy, including also energy exchange principles between mechanical, thermal, and electromagnetic energy in multifunctional materials.
Eduard Feireisl, the principal investigator of the ERC Advanced Grant MATHEF (Mathematical Thermodynamics of Fluids), 2013–2018.
🅭🅯 CC BY 4.0
Tomáš Vejchodský, Director of the Institute, in the promotional video presenting the cooperation with the company Doosan-Bobcat EMEA, youtube.com/watch?v=_I2KN-z_fo4.
🅭🅯 CC BY 4.0
An outstanding achievement was the ERC Advanced Grant MATHEF (Mathematical Thermodynamics of Fluids) awarded to Eduard Feireisl in 2013–2018. He and his collaborators built a complete mathematical
theory describing the motion of compressible viscous heat-conducting fluids, including aspects of stochastic forcing and construction of convergent numerical schemes. The novel and original approach
to the interpretation of the principles of continuum thermodynamics in modelling heat-conducting fluid flow turned out to be a rich source of results for the general theory, as for example, the
concept of dissipative measure-valued solutions. Further essential results concerned well-posedness, regularity and stability of the Euler system and similar partial differential equations, including
the construction of a stable finite volume scheme and proof of its convergence via dissipative measure-valued solutions.
The team members are involved in the Nečas Center for Mathematical Modeling, a research platform established by the Institute, the Charles University and the Institute of Computer Science of the
Czech Academy of Sciences with the ambition of coordinating and supporting research and education activities in the theoretical and applied mathematics, particularly in the field of continuum
mechanics. They are also active in the network for industrial mathematics EU-MATHS-IN.CZ (part of the European network EU-MATHS-IN).
Mathematical Logic and Theoretical Computer Science
The research programme of this department concerns mathematical problems arising from theoretical computer science, logic, set theory, finite combinatorics, and control theory. The main topics
studied by its members include proof and computational complexity, logical foundations of arithmetic, quantum information theory, graph theory, and set theory. The problems studied have foundational
importance in themselves, and potentially also practical applications, for example in data security.
Pavel Pudlák, the principal investigator of the ERC Advanced Grant FEALORA (Feasibility, Logic and Randomness in Computational Complexity) in 2014–2018.
🅭🅯 CC BY 4.0
In the area of the logical foundations of mathematics, the team is one of the world’s leading centres of research in bounded arithmetic and proof complexity. Computational complexity is a discipline
with a short history that has only recently been recognized as an important field not only in computer science but also in mathematics. It is also due to the fact that fundamental questions in this
domain (e.g. the famous “P versus NP” problem) belong to the set of mathematical problems which resist being solved for decades. Pavel Pudlák’s group attacks these problems using methods of
mathematical logic. He believes that the reason why we cannot answer these questions is fundamental in nature, and therefore their logical aspects should be studied. The research domain in which he
and his colleagues work and have already reached important results is called proof complexity. While computational complexity deals with how difficult it is to compute something, proof complexity
asks how difficult it is to prove it.
Numerical Analysis
Following a decades-long tradition, this department investigates both theoretical and practical aspects of computational science, mainly numerical methods for partial differential equations and
numerical linear algebra, whereas classical and strong areas have been complemented with new research topics. Its members focus on questions of convergence, efficiency, and reliability of numerical
methods for partial differential equations, including matrix computations and high-performance implementations on parallel computer architectures. Members of the team led by Michal Křížek are experts
in the finite element method, saddle-point systems, preconditioning, domain decomposition methods, rounding error analysis, high-performance computing and computational fluid dynamics.
The team is involved in the Nečas Center for Mathematical Modeling and in the network for industrial mathematics EU-MATHS-IN.CZ. It has succeeded in competitions for the CPU time at large European
computers and cooperates with the IT4Innovations National Supercomputing Center of the Technical University in Ostrava.
Members of the five above-mentioned departments organize a dozen regular seminars and about the same number of international workshops and conferences. Around 150 foreign researchers visit the
Institute every year. In 2016, the Institute established the Eduard Čech Distinguished Visitor Programme with the aim of significantly enhancing its creative environment by attracting highly distinguished mathematicians for longer periods of time. One visitor is selected every year to deliver a series of lectures and to substantially develop scientific collaboration with researchers in the Institute. The visitor is also expected to deliver the prestigious Eduard Čech Lecture for the general mathematical community.
Other activities and service to the community
Although the emphasis is on fundamental research, attention is also paid to connections with applications. The Institute is involved in the Strategy AV21 programme “Hopes and Risks of the Digital
Era” run by the Czech Academy of Sciences. The role of the Institute is to develop mathematical models for engineering applications. The Institute cooperates on a long-term basis with the Innovation
Centre of the company Doosan Bobcat EMEA, the renowned producer of compact loaders and excavators.
The Institute publishes three mathematical journals. The Czechoslovak Mathematical Journal and Mathematica Bohemica are continuations of Časopis pro pěstování mathematiky a fysiky (Journal for
Cultivation of Mathematics and Physics) established in 1872. The aim of these two journals is to publish original research papers of high scientific quality in all fields of mathematics. The third
journal, Applications of Mathematics, specializes in mathematical papers directed at applications in various branches of science.
The Institute also provides several services for the wider mathematical community and the public. Its library, with almost 100,000 volumes including 35,000 monographs and 1,300 journal titles, is the largest public mathematical library in the country. Since 1996, the Prague editorial group has cooperated with zbMATH to produce metadata and reviews of mathematical publications. Since 2009, the
Institute has been developing the Czech Digital Mathematics Library (DML-CZ, https://dml.cz) with the aim of digitizing, organizing and archiving the relevant mathematical literature published
throughout history in the Czech lands, and providing free access to metadata and full texts. DML-CZ currently includes 17 journal titles, proceedings of 8 conference series, and about 300 books. The
Institute is a member of the international consortium that has developed the European Digital Mathematics Library (EuDML, https://eudml.org).
Students during the Open House Days and the exhibition of Imaginary posters demonstrating the beauty of mathematical surfaces.
Close attention is paid to the popularization of mathematics. Public lectures in the annual Open House Days used to be attended by more than a thousand visitors, mostly high-school students. The
restrictions connected with the Covid-19 pandemic inspired us to create a webpage for students and the general public presenting various mathematical problems, popular lectures and other interesting
materials like posters celebrating the laureates of the Abel Prize.
Posters presenting the winners of the Abel Prize.
A group photo of members of the Institute at the annual bike trip, July 30, 2020.
The well-being of the Institute’s employees and their work-life balance are supported in various ways. There is a tradition of cultivating a friendly atmosphere and of accommodating the individual needs of employees. The main objective of the currently running project “Institute of Mathematics CAS goes for HR Award – implementation of the professional HR management” is to improve the stimulating and attractive work environment in the Institute and to apply for the HR Excellence in Research Award (known as the HR Award) granted by the European Commission.
To learn more about the Institute, please visit the webpage www.math.cas.cz.
Cite this article
Jiří Rákosník, Miroslav Rozložník, Institute of Mathematics of the Czech Academy of Sciences. Eur. Math. Soc. Mag. 121 (2021), pp. 40–44
DOI 10.4171/MAG/36
This open access article is published by EMS Press under a
CC BY 4.0
license, with the exception of logos and branding of the European Mathematical Society and EMS Press, and where otherwise noted.
The Cognitive Psychology of Baseball!
Ah, yes, a real game (kidding, Scrabble people). If you've watched many baseball games or baseball movies, you know that one of the things that makes for a successful hitter is the ability to predict
what the next pitch will be. Is it going to be inside or outside? Will it be a fastball or a breaking ball? If you're expecting a fastball and get a slow, breaking curveball, it's unlikely you'll get
anywhere near it. So cognitive processing is an important part of being a good hitter. At least, that's what a hitting coach would tell you. And according to a 2002 paper by Rob Gray in Psychological
Science, they'd be right.
Basically, Gray had college baseball players stand in front of a screen with a simulated baseball diamond, and swing at a simulated pitch. This setup led to one of the coolest method sections ever,
if you're a baseball fan and a geek (like me):
Mounted on the end of the bat (Louisville Slugger Tee Ball bat; 63.5 cm long) was a sensor from a Fastrak (Polhemus, Colchester, Vermont) position tracker. The x, y, z position of the end of the
bat was recorded at a rate of 120 Hz.
The pitch simulation was based on that used by Bahill and Karnavas (1993). Balls were launched horizontally (i.e., 0° ) from a simulated distance of 59.5 ft (18.5 m); that is, the pitcher
released the ball 1 ft in front of the pitching rubber. The only force affecting the flight of the ball was gravity. The height of the simulated pitch at time t, Z(t), was changed according to
Z(t) = -1/2 * g * t^2,
where g is the acceleration of gravity, 32 ft/s² (9.8 m/s²).
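As a concrete check on these numbers (a sketch of my own, not from the paper — the only inputs are the mph-to-ft/s conversion and the release distance above), the flight time and gravity drop for the two pitch speeds can be computed directly:

```python
# Flight time and gravity drop for the two simulated pitch speeds.
# Units: feet and seconds; 1 mph = 22/15 ft/s; g = 32 ft/s^2.

G = 32.0  # acceleration of gravity, ft/s^2
RELEASE_DISTANCE = 59.5  # ft, release point to plate (per the setup above)

def flight_time(speed_mph):
    """Time for the pitch to travel from release to the plate."""
    speed_fps = speed_mph * 22.0 / 15.0
    return RELEASE_DISTANCE / speed_fps

def drop(t):
    """Vertical drop Z(t) = -1/2 * g * t^2 after t seconds."""
    return -0.5 * G * t ** 2

for mph in (85, 70):
    t = flight_time(mph)
    print(f"{mph} mph: {t:.3f} s in flight, drops {-drop(t):.2f} ft")
```

The roughly 0.1 s difference in flight time — and the extra foot and a half of drop on the slow pitch — is exactly the gap the batter's prediction has to cover.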
I know you're supposed to include descriptions of your equipment in method sections, but I can't get over the inclusion of the fact that the bat was a Louisville Slugger. I'm sorry, I'm a baseball geek.
Anyway, Gray included two kinds of pitches: slow and fast. The fast pitches were simulated at 85 +/- 1.5 mph, and the slow ones at 70 +/- 1.5 mph. Whether the pitch was fast or slow depended on the pitch count. For 0-0, 1-0, 0-1, 1-1, 2-1, 2-2, and 3-2 counts (that's balls-strikes, for those of you who don't know baseball), the probabilities for fast and slow pitches were .50-.50. For pitcher's counts (0-2 and 1-2), the slow balls were more likely (0.65), and for hitter's counts (2-0, 3-0, and 3-1), fast pitches were more likely (0.65). The pitch count was displayed on the screen so the
hitters could keep track. There were three different horizontal positions for the pitches: strike, outside ball and inside ball. The strikes crossed the plate at 0 +/- 1 inch from the center of the
plate, the outside balls at 12 +/- 1 inch away from the center of the plate (in the direction away from the batter, that is), and the inside balls at 12 +/- 1 inch from the center in the direction
towards the batter. Whether the pitch was a ball or a strike was randomly chosen. Each hitter took 25 swings per block for 10 blocks, with rest (a lot, I hope, 'cause 250 swings is crazy) in between.
So here's what the hitters had to predict: whether the ball would be fast or slow, and whether it would be inside, outside, or down the middle. Certain types of pitches (e.g., slow breaking balls) are associated with pitcher's counts, and others (fastballs, mostly) with hitter's counts. Since Gray used these associations to determine pitch probabilities, the batters had some basis for predicting pitch speeds. Since pitch locations were random, the batters just had to guess those.
In order to get a measure of the batters' accuracy, Gray assumed that they'd be trying to hit the ball at a given position (0.9 meters in front of the plate) and bat height (the lowest position during the swing), which he got from previous research on baseball hitters. He then took a measure of temporal error, which he calculated by subtracting the time at which the bat reached its lowest position from the time the ball was at 0.9 meters in front of the plate. Gray's prediction was that if the batters predicted the right pitch, their temporal error would be lower than if they predicted the wrong one.
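For concreteness, the count-dependent pitch probabilities can be simulated directly. Here is a minimal Python sketch of my own (not from the paper): the count structure follows the description above, and the predictor simply guesses the more likely speed, whereas Gray's Markov model also updates on the accuracy of its own past predictions.

```python
import random

# P(fast pitch) given the (balls, strikes) count, per the description above;
# the slow-pitch probability is the complement.
P_FAST = {
    (0, 0): 0.5, (1, 0): 0.5, (0, 1): 0.5, (1, 1): 0.5,
    (2, 1): 0.5, (2, 2): 0.5, (3, 2): 0.5,
    (0, 2): 0.35, (1, 2): 0.35,                 # pitcher's counts: slow more likely
    (2, 0): 0.65, (3, 0): 0.65, (3, 1): 0.65,   # hitter's counts: fast more likely
}

def throw_pitch(count, rng):
    """Simulate one pitch speed for the given count."""
    return "fast" if rng.random() < P_FAST[count] else "slow"

def predict(count):
    """A batter exploiting the count guesses the more likely speed;
    on 50/50 counts there is nothing to exploit, so guess arbitrarily."""
    p = P_FAST[count]
    if p > 0.5:
        return "fast"
    if p < 0.5:
        return "slow"
    return "fast"

rng = random.Random(0)
N = 10000
hits = {count: 0 for count in P_FAST}
for count in P_FAST:
    for _ in range(N):
        if predict(count) == throw_pitch(count, rng):
            hits[count] += 1

# Accuracy is ~0.65 on pitcher's/hitter's counts and ~0.50 on neutral counts,
# mirroring why temporal errors were lowest on the exploitable counts.
print({count: hits[count] / N for count in sorted(hits)})
```

The same asymmetry — predictable counts versus coin-flip counts — is what drives the pattern of temporal errors discussed below.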
To test this prediction, Gray developed a "finite-state Markov model" to simulate batting performance with predicted pitches. I won't get into the details of the model (you can read the paper, linked
above, if you're really interested). Basically, the model predicted the next pitch based on its predictions for the previous pitches and the accuracy of those predictions, along with knowledge of
pitch count-pitch speed associations. If the model's performance was similar to that of the real batters, then this would provide evidence for the benefit of correctly predicting a pitch. Here's a
graph comparing the model's performance to one of the batter's (from Gray's Figure 2, p. 544):
The graph presents temporal error scores for the model and the batter for seven different pitch counts. The important thing to notice is that despite some small difference in absolute numbers, the
patterns for the model and batter are highly similar. As you would expect, given the pitch count-pitch speed associations described above, early counts that don't overly favor either the pitcher or
the batter (0-1, 1-0, and 1-1) produce high errors, because it's more difficult to predict the next pitch (recall that for these counts, the probability of a fast or slow pitch was 0.5-0.5). For the
hitter's counts (2-0, 3-0, and 3-1), temporal errors were relatively small for both the model and the batter, reflecting the fact that they could accurately predict fast pitches on these counts. On the
other hand, error scores were relatively high for pitcher's counts (0-2 and 1-2), despite the fact that these are associated with slow pitches. This reflects the fact that batters generally don't do
well on pitcher's counts (you have to swing at anything close, whether you predicted correctly or not).
So there you have it, the strong correlation between model and batter performance suggests that predicting the next pitch correctly really is important for hitting successfully. Hitting coaches and
color commentators have been right all this time. Of course, there's a massive duh factor to this, but it's still cool to see it confirmed in a laboratory environment.
Gray, R. (2002). "Markov at the Bat": A model of cognitive processing in baseball batters. Psychological Science, 13, 543-548.
Two great baseball players engaged once, it is said, in a very sophisticated Zero Sum Game of Mathematical Disinformation Theory.
The story is well-known; the analysis by Jonathan Vos Post is original.
Yogi Berra [catcher] to Hank Aaron [batter]:
"The label's on top."
[Translation: it is widely believed that the location of the label of the bat with respect to the bat-ball impact point affects the probability that the bat will break on contact, which is negatively correlated with the probability of a home run, due to momentum conservation considerations; hence I offer to you that you drop your model of this at-bat and replace it by one with an additional variable to take into account, as I hope that you will, I expect you to decline in performance by the replacement cost of
Hank Aaron [batter] vs Yogi Berra [catcher]:
"I didn't come here to hit and read at the same time."
[Translation: I'm on my way to the record number of home runs hit in a lifetime, surpassing Babe Ruth's record, and likely to stand until at least 2007 with Barry Bonds, and then Alex Rodriguez
perhaps 12 to 14 years later; I decline to make my analysis more complex, as I assert that I measure myself as being on the manifold rather near the global maximum of my performance in the zero-sum
game between pitcher and batter, and believe that I would do worse if I changed my eigenvector in the predicted fast-ball/ curve-ball/ slider probability distribution; while you are destined to be
known in 2007 primarily through a cryptic ad for Aflac, hence do not divert me from my optimal allocation of resources].
The branch-and-bound Decision Analysis corollary by Yogi Berra, at another time:
"When you come to a fork in the road, take it!"
Baseball. America's Game. The America of John Forbes
Nash, Jr., anyway.
-- Prof. Jonathan Vos Post
Polynomial is in ideal of a coordinate ring
I am getting an error with the code below. Please advise how I can do this. Thanks
sage: K.<x> = QQ[]
sage: _.<y> = K[]
sage: K.<y> = K.extension(y^2 - x^3 - x)
sage: I = Ideal(x, y)
sage: I
Ideal (x, y) of Univariate Quotient Polynomial Ring in y over Univariate Polynomial Ring in x over Rational Field with modulus y^2 - x^3 - x
sage: x - y^2 + x^3 in I
1 Answer
This works:
sage: K.<x, y> = QQ[]
sage: I = Ideal(y^2 - x^3 - x)
sage: L.<X, Y> = K.quotient(I)
sage: I = Ideal(X, Y)
sage: I
Ideal (X, Y) of Quotient of Multivariate Polynomial Ring in x, y over Rational Field by the ideal (-x^3 + y^2 - x)
sage: X - Y^2 + X^3 in I
scipy.signal.cheb1ord(wp, ws, gpass, gstop, analog=False, fs=None)[source]#
Chebyshev type I filter order selection.
Return the order of the lowest order digital or analog Chebyshev Type I filter that loses no more than gpass dB in the passband and has at least gstop dB attenuation in the stopband.
Parameters:
wp, ws : float
Passband and stopband edge frequencies.
For digital filters, these are in the same units as fs. By default, fs is 2 half-cycles/sample, so these are normalized from 0 to 1, where 1 is the Nyquist frequency. (wp and ws are thus
in half-cycles / sample.) For example:
■ Lowpass: wp = 0.2, ws = 0.3
■ Highpass: wp = 0.3, ws = 0.2
■ Bandpass: wp = [0.2, 0.5], ws = [0.1, 0.6]
■ Bandstop: wp = [0.1, 0.6], ws = [0.2, 0.5]
For analog filters, wp and ws are angular frequencies (e.g., rad/s).
gpass : float
The maximum loss in the passband (dB).
gstop : float
The minimum attenuation in the stopband (dB).
analog : bool, optional
When True, return an analog filter, otherwise a digital filter is returned.
fs : float, optional
The sampling frequency of the digital system.
Returns:
ord : int
The lowest order for a Chebyshev type I filter that meets specs.
wn : ndarray or float
The Chebyshev natural frequency (the “3dB frequency”) for use with cheby1 to give filter results. If fs is specified, this is in the same units, and fs must also be passed to cheby1.
See also
cheby1
Filter design using order and critical points
buttord, cheb2ord, ellipord
Find order and critical points from passband and stopband spec
iirfilter
General filter design using order and critical frequencies
iirdesign
General filter design using passband and stopband spec
Examples
Design a digital lowpass filter such that the passband is within 3 dB up to 0.2*(fs/2), while rejecting at least -40 dB above 0.3*(fs/2). Plot its frequency response, showing the passband and
stopband constraints in gray.
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> import numpy as np
>>> N, Wn = signal.cheb1ord(0.2, 0.3, 3, 40)
>>> b, a = signal.cheby1(N, 3, Wn, 'low')
>>> w, h = signal.freqz(b, a)
>>> plt.semilogx(w / np.pi, 20 * np.log10(abs(h)))
>>> plt.title('Chebyshev I lowpass filter fit to constraints')
>>> plt.xlabel('Normalized frequency')
>>> plt.ylabel('Amplitude [dB]')
>>> plt.grid(which='both', axis='both')
>>> plt.fill([.01, 0.2, 0.2, .01], [-3, -3, -99, -99], '0.9', lw=0) # stop
>>> plt.fill([0.3, 0.3, 2, 2], [ 9, -40, -40, 9], '0.9', lw=0) # pass
>>> plt.axis([0.08, 1, -60, 3])
>>> plt.show()
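A hedged addendum (not part of the original page): when a sampling frequency is supplied via the documented fs parameter, wp and ws are given in the same units, and the returned natural frequency comes back in those units as well. With a hypothetical fs of 2000 Hz, edges of 200 Hz and 300 Hz correspond to the normalized 0.2 and 0.3 used above:

```python
from scipy import signal

# Same specification in Hz: the Nyquist frequency is fs/2 = 1000 Hz,
# so 200 Hz / 300 Hz match the normalized 0.2 / 0.3 edges above.
N_hz, Wn_hz = signal.cheb1ord(200, 300, 3, 40, fs=2000)
N, Wn = signal.cheb1ord(0.2, 0.3, 3, 40)

# The order is identical; only the units of the critical frequency change.
print(N_hz, Wn_hz)
```

Remember that fs must then also be passed to cheby1 when designing the filter itself.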
Weighted Averages in Survey Analysis
An average is found when a group of values is added together and then divided by the total number of values. This way of finding averages is not necessarily applicable to averaging the results of a survey. Presenting survey data using weighted averages may be the best way to convey the information.
What is a Weighted Average?
A weighted average is an average of factors in which certain factors count more than others or are of varying degrees of importance. Weighted averages are often found in regard to assigning grades in school. Scores on exams may carry more weight than homework completion. Projects may count more than attendance or participation. All of these factors are combined to create a final grade for a student, but each component of the final grade is not worth the same amount.
Weighted Averages and Surveys
When conducting a survey, you are asking a varied group of respondents the same question. If every respondent is counted individually and has the same importance, you can take a simple average to find the survey result. If you are surveying groups with different numbers of people, however, the groups will not be counted equally, and the results will be skewed. In this case, you would assign different weights to the responses to keep the survey results as accurate as possible.
Why is a Weighted Average Important?
Let's suppose that you have broken a group of survey respondents into two smaller groups, groups A and B, and that group A has 10 more people in it than group B. If you were to average the two groups' average answers together without weighting them, each of group B's answers would effectively count for more, since fewer people contribute to that group's result. In order to distribute the answers evenly, you must give more weight to group A's answers. This will make your survey results more accurate.
How to Find a Weighted Average
In order to accurately combine the responses of groups A and B, you will need to find the weighted average. To do so, calculate the average response for group A and for group B. Multiply the number of respondents in group A by the average response of group A. Multiply the number of respondents in group B by the average response of group B. Add these two products together and divide by the total number of respondents from groups A and B. This will weight the survey and allow you to analyze the data accurately.
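The steps above can be written out in a few lines of Python. This is a sketch with made-up numbers for illustration:

```python
# Weighted average of survey groups, following the steps above.

def weighted_average(groups):
    """groups: list of (number_of_respondents, average_response) pairs."""
    total_respondents = sum(n for n, _ in groups)
    weighted_sum = sum(n * avg for n, avg in groups)  # respondents x average
    return weighted_sum / total_respondents

# Group A: 30 respondents averaging 4.0; group B: 20 respondents averaging 3.0.
groups = [(30, 4.0), (20, 3.0)]
print(weighted_average(groups))  # 3.6 -- not the unweighted 3.5
```

Note that a naive average of the two group averages would give 3.5, underweighting the larger group; the weighted average of 3.6 reflects every respondent equally.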
Photo Credits
Stockbyte/Retrofile/Getty Images
Assessment of the Wavelet Transform in Reduction of Noise from Simulated PET Images
An efficient method for tomographic imaging in nuclear medicine is PET. Higher sensitivity, higher spatial resolution, and more accurate quantification are advantages of PET, in comparison to SPECT.
However, a high noise level in the images limits the diagnostic utility of PET. Noise removal in nuclear medicine is traditionally based on the Fourier decomposition of the images. This method is
based on frequency components, irrespective of the spatial location of the noise or signal. The wavelet transform presents a solution by providing information on frequency contents while retaining
spatial information, alleviating the shortcoming of Fourier transformation. Thus, wavelet transformation has been extensively used for noise reduction, edge detection, and compression. Methods: In
this research, SimSET software was used for simulation of PET images of the nonuniform rational B-spline–based cardiac-torso phantom. The images were acquired using 250 million counts in 128 × 128
matrices. For a reference image, we acquired an image with high counts (6 billion). Then, we reconstructed these images using our own software developed in a commercially available program. After
image reconstruction, a 250-million-count image (noisy image or test image) and a reference image were normalized, and then root mean square error was used to compare the images. Next, we wrote and
applied denoising programs. These programs were based on using 54 different wavelets and 4 methods. Denoised images were compared with the reference image using root mean square error. Results: Our
results indicate stationary wavelet transformation and global thresholding are more efficient at noise reduction than are other methods that we investigated. Conclusion: Wavelet transformation is a
useful method for denoising simulated PET images. Noise reduction with this transform, however, is accompanied by some loss of high-frequency information, so a balance must be struck between noise suppression and the visual quality of the image.
PET is an efficient technique to determine 3-dimensional distributions of radiotracers in a patient's body. The technique is used to map the biologic function and metabolic changes of the organs
under investigation. PET has a good sensitivity and specificity in diagnosis and differentiation of malignant from benign tumors (1,2).
Although PET has made crucial progress, it nevertheless bears the main weakness of nuclear medicine: poor count density in images. No matter which organ is imaged, noise is always present in the
nuclear medicine images and always causes error in quantification. Signal-to-noise ratio, although considerably higher in PET than in SPECT, is yet much lower than in other tomography techniques such
as CT and MRI. The inherent noise of the PET images considerably increases if improvement techniques such as scatter or randoms correction are applied (3).
Conventionally, finite impulse response filters are used to improve the signal-to-noise ratio of nuclear medicine data (3). These filters are mainly of the low-pass and resolution recovery (adaptive)
types. Butterworth and Gaussian filters are common low-pass filters, and Metz and Wiener filters are the main resolution recovery filters (4–6).
Noise removal in nuclear medicine is traditionally based on Fourier decomposition of images. However, this method has 2 major drawbacks. The Fourier transform decomposes the data into a series of
sine and cosine functions, representing the frequency components of the data. Because the sinusoidal functions are periodic and of infinite length, the object domain information (spatial information,
in the case of an image) is ignored. Therefore, noise removal based on the Fourier transform affects exclusively the frequency components, irrespective of the spatial location of the noise or signal.
The result is uniform distortion of the image, regardless of the local signal-to-noise ratio. Moreover, this type of decomposition requires the frequency of the data to be uniform over the entire
image, which is not always the case in nuclear medicine data (7).
An alternative method is the wavelet transform, which successfully overcomes these shortcomings. The wavelet transform provides a time-frequency representation of the signals. The wavelet was
developed as an alternative to the short-time Fourier transform because high frequencies are better resolved in time and low frequencies are better resolved in the frequency domain (8,9).
During the last decade, wavelet transformation has gradually been replacing the conventional Fourier transform in many applications. Wavelet transformation has several advantages over the
conventional Fourier transform that are potentially advantageous in nuclear medicine. The main advantage is flexibility in the selection of base functions for decomposing the data. Although the
Fourier transform uses just the sine and cosine functions, wavelet transforms may use an infinite set of possible base functions. Thus, wavelet analysis provides access to information that may be
obscured by Fourier analysis. Another advantage of the wavelet is localization of the base functions in space (8,9). Although the sine and cosine functions have infinite length, the wavelets are
quite short, creating an opportunity for analyzing signals that contain discontinuities and sharp spikes—a common situation in nuclear medicine images. Moreover, wavelet transformation also brings
the possibility of extracting edge information, which provides essential visual cues in the clinical interpretation of images.
The wavelet has the ability to approximate an image with just a few coefficients independent of the original image resolution and, thus, makes possible the comparison of images of different resolutions.
These excellent features make the wavelet transform an exceptionally powerful tool to detect and encode important elements of an image in terms of wavelet coefficients. Adaptive thresholding of the
coefficients corresponding to undesired components removes the noise from the image much more efficiently than through the conventional Fourier method. Wavelet denoising is now an accepted and widely
used method of noise reduction (8,10,11) but has not yet received considerable attention in nuclear medicine. Regardless of some reports in favor of wavelet denoising (12), it is nevertheless on the
waiting list of nuclear medicine.
The nonuniform rational B-spline–based cardiac-torso phantom was used to generate a torso of a typical human as the virtual object to be imaged (13,14). Adjustments of activity distribution in the
phantom were based on ^18F-FDG uptake in the organs of a healthy human. Adjustments of attenuation coefficients for the tissues in the phantom were based on the Zubal phantom (15,16) attenuation
coefficients. The phantom was constructed in a 256 × 256 × 256 matrix corresponding to 1.6 mm^3 voxels. To fully resemble a human torso, normal cardiac and respiratory motion was also considered in
the creation of the phantom.
The SimSET PET simulator, version 2.6.2.6, was used to simulate a Discovery LS PET scanner (GE Healthcare). The SimSET package uses Monte Carlo techniques to model physical processes (17). Validation
of the PET Monte Carlo simulator has been done already (18).
We simulated Monte Carlo 2-dimensional PET using ^18F-FDG as the radiopharmaceutical. The energy window was set to 350–650 keV, corresponding to 511 keV ± 30% (the standard window) (19).
The imaging system was adjusted to fully cover the cardiac and liver regions, as these organs are the main subjects of the PET imaging. The total number of slices after reconstruction was 20.
Intentionally, the simple backprojection technique was used to reconstruct the images in crude format without any manipulation.
The simulation first generated a history of 6 billion photons (6,000 million photons) to create the noise-free data to be used as a reference image. Then, the simulation was repeated generating 250,
500, 1,000, 1,500, 2,000, 2,500, and 3,000 million photons as test images of different signal-to-noise ratios.
Images were simulated with the phantom representing a person under normal conditions: the lengths of the beating heart cycle and of the respiratory cycle were adjusted to 1 and 5 s, respectively.
Maximum diaphragm motion in normal breathing was specified as 2 cm. The phantom length was 40 cm (−20 to +20), and we adjusted scan range to 14 cm (−2 to +12) to display the entire heart and liver.
The number of phantom slices was specified as 256, and the number of image slices after reconstruction was specified as 20. We acquired 2 images: one with 6 billion counts as a reference image and
another with 250 million counts as a noisy image.
MATLAB software (The MathWorks) was used for programming. In the first step, we reconstructed the simulated PET images using our programs. Before denoising, the reference and noisy images were compared using line profiles and the root mean squared error (RMSE). Using a line profile, we displayed the pixel values in a specific row of the image for both the reference and the noisy images; such a profile gives an approximate picture of the noise in the image, with pixel indices plotted along the x-axis and pixel values along the y-axis. RMSE indicates the difference between two images; it ranges from zero to infinity, and the best value, zero, is achieved only if the 2 images are identical. We calculated the RMSE between the reference and noisy images and, after denoising, between the reference and denoised images.
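As a concrete illustration (not the authors' code), the RMSE comparison used throughout this study amounts to a few lines:

```python
import numpy as np

def rmse(reference, test):
    """Root mean squared error between two equally sized images."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    return np.sqrt(np.mean((reference - test) ** 2))

ref = np.zeros((4, 4))
noisy = ref + 2.0        # constant offset of 2 everywhere
print(rmse(ref, ref))    # 0.0 for identical images
print(rmse(ref, noisy))  # 2.0
```

A lower RMSE against the high-count reference image is what "closer to the reference" means in the comparisons below.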
Images were denoised by 4 different methods of wavelet denoising: the single-level 2-dimensional discrete wavelet transform (DWT); the single-level 2-dimensional discrete stationary wavelet transform (SWT); global thresholding, which applies a single positive real number as a uniform threshold; and level-dependent thresholding, which sets the threshold according to the decomposition level, in 2 variants—hard thresholding and soft thresholding. In all methods, the noisy image was decomposed into the approximation, horizontal detail, vertical detail, and diagonal detail. Then, the image was reconstructed using the techniques explained in the “Results” section.
Fifty-four wavelets (Haar [1 wavelet], Daubechies [9 wavelets], Symlets [8 wavelets], Coiflets [5 wavelets], BiorSplines [15 wavelets], ReverseBior [15 wavelets], and DMeyer [1 wavelet]) were used in
each of the denoising methods. The optimum wavelet in each method was determined in terms of minimum RMSE between the denoised test images and the reference image. All calculations were performed
using software developed in MATLAB, version 7.1.
The test images were decomposed using wavelet transformation into the approximation, horizontal detail, vertical detail, and diagonal detail. Then images were reconstructed again using different
combinations of the approximation and details or using different thresholding methods.
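The decompose-then-reconstruct procedure can be illustrated with a hand-rolled single-level 2-D Haar transform (a simplified Python sketch rather than the authors' MATLAB code; the paper tests 54 different wavelets):

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar DWT: approximation plus horizontal/vertical/diagonal details."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    cA = (a + b + c + d) / 2
    cH = (a + b - c - d) / 2
    cV = (a - b + c - d) / 2
    cD = (a - b - c + d) / 2
    return cA, (cH, cV, cD)

def haar_idwt2(cA, details):
    """Inverse transform; pass zero details to keep only the approximation."""
    cH, cV, cD = details
    y = np.empty((2 * cA.shape[0], 2 * cA.shape[1]))
    y[0::2, 0::2] = (cA + cH + cV + cD) / 2
    y[0::2, 1::2] = (cA + cH - cV - cD) / 2
    y[1::2, 0::2] = (cA - cH + cV - cD) / 2
    y[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return y

rng = np.random.default_rng(1)
noisy = rng.normal(100, 10, (8, 8))
cA, details = haar_dwt2(noisy)

# Reconstruction with all coefficients is exact; dropping the details
# discards the high-frequency content (here each 2x2 block becomes its mean).
zeros = tuple(np.zeros_like(cA) for _ in range(3))
denoised = haar_idwt2(cA, zeros)
```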
We compared the noisy and denoised images globally (RMSE) and locally using profiles, the methods conventionally used in nuclear medicine; more complicated methods exist but are not well suited to this setting. Identical profiles were drawn on the images. In simulated images, the reproducibility is high. Subtraction of 2 profiles may be helpful but is not necessary.
In the first method (Table 1; Fig. 1), DWT was used for denoising, and then we reconstructed images using 4 procedures: only with approximation, with approximation and horizontal detail, with
approximation and vertical detail, and with approximation and diagonal detail. We would lose important information in the reconstructed image if approximation were eliminated. To show the change, we
calculated relative RMSE (RMSE between the reference and denoised images divided by the RMSE between the reference and noisy images). Under the best conditions, the relative RMSE is about 0.31 when
we used solely approximation. Adding vertical or horizontal detail to the approximation gave a relative RMSE of about 0.58, showing no significant differences between the choices.
Figure 1 shows that image quality improved after denoising, but as the line profiles indicate, noise reduction was more significant when only approximation was used as the reconstruction procedure.
The best wavelets in this method of denoising are Daubechies.
Then, the image was reconstructed using other procedures: with approximation, horizontal detail, and vertical detail; with approximation, horizontal detail, and diagonal detail; and with
approximation, vertical detail, and diagonal detail. Afterward, we compared these reconstructed images with the image that was reconstructed using only approximation. The result was similar to that
of the previous procedure.
In the second method (Table 2; Fig. 2), SWT was used for denoising. We performed the decomposition to 3 levels and used only the approximation for reconstruction, because the RMSE value increased when we added
horizontal, vertical, or diagonal detail to the approximation. The minimum RMSE values were observed when only approximation was used for reconstruction. At levels 2 and 3 of decomposition, relative
RMSE values were about 0.09 and 0.12, respectively, and the difference between them was not statistically significant.
Figure 2 indicates that at levels 2 and 3 of decomposition, noise reduction is more considerable than at level 1. Level 2 shows the smallest relative RMSE and is most appropriate. At level 1, the
best wavelet in this method of denoising is Haar.
In the third method (Table 3; Fig. 3), a global threshold was used for denoising. In this method, a positive real number is used for uniform threshold. The same threshold is applied at all levels for
all subimages. We compared the results at 3 levels of decomposition. Our results show that noise reduction is more significant at levels 2 and 3 of decomposition than at level 1. However, their
difference is not statistically significant. The best wavelets in this method of denoising are Daubechies.
In the fourth method (Table 4; Fig. 4), a level-dependent threshold was used for denoising. This method uses different thresholds for each transformation level. The threshold is obtained using a
wavelet coefficients selection rule based on the strategy of Birge et al. (20). We compared the results of hard and soft thresholding at 1 level of decomposition. Our results showed that
level-dependent soft thresholding more efficiently reduces noise than does hard thresholding.
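Hard and soft thresholding differ only in how surviving coefficients are treated; a generic sketch (illustrative threshold values, not the Birge-Massart rule used in the paper):

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Keep coefficients with |c| > t unchanged; zero the rest."""
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)

def soft_threshold(coeffs, t):
    """Zero small coefficients and shrink the survivors toward zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
hard = hard_threshold(c, 1.0)   # keeps -3, 1.5, 4 unchanged; zeros the rest
soft = soft_threshold(c, 1.0)   # shrinks them to -2, 0.5, 3
```

Soft thresholding's shrinkage removes the discontinuity at the threshold, which is one reason it tends to produce smoother denoised images than hard thresholding.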
Moreover, 4 methods of level-dependent soft thresholding at 1 level of decomposition were compared. (These 4 methods are different from the 4 methods of denoising.) In these methods, the threshold
values and numbers of coefficients were determined on the basis of the α-parameter (21). Threshold values and numbers of coefficients to be kept for denoising correspond to the α-parameter. The
MATLAB default value for α equals 3. In other procedures, penalized threshold values were used. In the penalhi procedure, a high value of α is used (2.5 ≤ α < 10). In the penalme and penallo
procedures, respectively, a medium value (1.5 < α < 2.5) and a low value (1 < α < 2) of α are used. The difference between the default procedure and other procedures was statistically significant,
but the results of the penalhi, penalme, and penallo procedures were similar. The best wavelet in both methods of denoising (hard and soft) is Haar.
In this study, we compared 4 different methods for wavelet denoising in PET images. All these methods were effective for denoising, but their results were different.
The results of DWT indicated that this method improves the RMSE value. However, the best result was obtained by using only approximation. In nuclear medicine images, high-frequency noise exists. When
we eliminate details that contain high-frequency information, noise reduces significantly. Adding one of the details to the approximation degrades RMSE, and this degradation is more pronounced for
horizontal and vertical details.
Classic DWT is not shift-invariant: the DWT of a translated version of a signal is not the same as the DWT of the original signal (11). SWTs, or undecimated algorithms, apply the low-pass and
high-pass filters without any decimation. The SWT is similar to the DWT except the signal is never subsampled and instead the filters are up-sampled at each level of decomposition. The SWT is an
inherently redundant scheme, as each set of coefficients contains the same number of samples as the input; for a decomposition of n levels there is thus a redundancy of 2^n. Therefore, SWT is a
time-consuming method and needs more space in the computer for processing but is more accurate than DWT because it is a shift-invariant method.
DWT looks much the same as the undecimated DWT (or SWT) except for down-sampling by 2 and up-sampling by 2. The down-sampling by 2 is often referred to as decimation by 2. With the down-sampling, the
detail coefficients and approximation coefficients are each about half the length of the original signal.
We have to be careful using the DWT instead of the SWT for 2 main reasons: First, down-sampling by 2 in the DWT can produce aliasing (throwing away half the samples can lead to false signals).
Second, this transform is not shift-invariant (sometimes called time-invariant) (22,23).
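The shift-variance of the decimated transform is easy to demonstrate with a toy 1-D Haar example (an illustration, not taken from the paper):

```python
import numpy as np

def haar_dwt1(x):
    """Decimated single-level 1-D Haar: pairwise averages and differences."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 2.0, 0.0, 2.0])
shifted = np.roll(x, 1)            # the same signal, delayed by one sample

_, d1 = haar_dwt1(x)
_, d2 = haar_dwt1(shifted)
# d2 is NOT a shifted copy of d1: the down-sampling pairs up different
# samples, so the decimated transform is shift-variant. The undecimated
# SWT, which skips the down-sampling, does not have this problem.
```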
In SWT, we studied the results of a reconstruction procedure that uses only approximation because when we add one detail to approximation for reconstruction, RMSE increases and image quality
degrades. The degradation can be due to intense distortion, which happens after decomposition. In this method, the best result for RMSE is provided by the procedure that uses only approximation and
the third level of decomposition. However, the visual quality of the image degrades because at the third level, in comparison to previous levels, more information is omitted.
It seems that in the reconstruction procedure using only approximation, the SWT is more effective than the DWT for noise reduction. At the first level of decomposition using SWT, relative RMSE is
about 0.196 and is comparable to the best state of DWT.
In global thresholding, the best result for RMSE is obtained by thresholding at levels 2 and 3 of decomposition. However, the visual quality of the image is better at level 2. At level 3, noise is
reduced but important edge information is also lost from the image.
Our results indicate that the global thresholding method is more effective than the level-dependent thresholding method of denoising. Using level-dependent thresholding, the relative RMSE is about
0.31 at best and is comparable to the relative RMSE in global thresholding at the third level. Therefore, uniform thresholding at all levels of decomposition is more efficient in denoising.
Of the methods that we examined on simulated PET images, the best denoising results were generated by SWT using only approximation at all 3 levels and global thresholding at levels 2 and 3. However,
the reduction in noise using wavelet transformation was accompanied by a loss of high-frequency information. We should attend to mutual agreement between noise reduction and visual quality of the
image. Therefore, SWT at level 1 has better results than the other methods when approximation is used for reconstruction.
Applying the methods to real patient data does not add much to the paper. Clinical application of the results will be useful but difficult and is not possible for us at this time.
Wavelet transformation is a useful method to reduce noise in PET images and therefore enhance the images. The characteristic of time-frequency localization of the wavelet helps us to shift and scale
the signals and keep important information for analysis after transformation. Our results show that the resolution and contrast of images are almost not influenced by the wavelet denoising methods
but that much noise is removed and image quality is improved.
• COPYRIGHT © 2009 by the Society of Nuclear Medicine, Inc.
3-D Human Pose Tracking
Representing articulated objects as a graphical model has gained much popularity in recent years; often the root node of the graph describes the global position and orientation of the object. In this work, we present a method to robustly track 3-D human pose by permitting larger uncertainty to be modeled over the root node than existing techniques allow. Significantly, this is achieved without increasing the uncertainty of the remaining parts of the model. The benefit is that a greater volume of the posterior can be supported, making the approach less vulnerable to tracking failure. Given a hypothesis of the root node state, a novel method is presented to estimate the posterior over the remaining parts of the body conditioned on this value. All probability distributions are approximated using a single Gaussian, allowing inference to be carried out in closed form. A set of deterministically selected sample points is used that allows the posterior to be updated for each part with only seven image likelihood evaluations, making it extremely efficient. Multiple root node states are supported and propagated using standard sampling techniques. We believe this to be the first work devoted to efficient tracking of human pose whilst modeling large uncertainty in the root node. The proposed method is more robust to tracking failures than existing approaches.
Proposed Method
1. To perform efficient tracking, the body is decomposed into its constituent parts, which allows it to be represented over a probabilistic graph. The nodes are partitioned into the root node, representing the global position and orientation of the body, and the remaining nodes, representing the orientation of each part.
The graphical structure used to represent the body, comprising the head (H), torso (Tor), left upper arm (LUA), left lower arm (LLA), left upper leg (LUL), left lower leg (LLL), and the opposing part for each limb.
2. The state of each node, excluding the root node, is represented as a quaternion rotation that describes the orientation of each part in the frame of reference of the body, where the base of the torso is the origin, the z-axis is vertical, and the y-axis is directed across the shoulders. A distribution over quaternions is then approximated.
3. The posterior distribution over the root node is represented by a set of samples. For each sample, a set of Gaussians are used to represent the posterior for each part conditioned on the given
root node state. The parameters of each distribution are updated in each frame using a set of deterministically selected sample points.
An example of a set of sample points used to estimate observational likelihood distributions projected into two views. They represent the distributions shown on the left.
4. Combining these with limb conditionals, that represent the prior distribution over the configuration between connected parts, efficient probabilistic inference can be performed.
5. Whilst the posterior distribution over the root node is propagated through time stochastically, the distributions over all other nodes are propagated by inflating the covariances deterministically.
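The quaternion representation in step 2 can be made concrete with a minimal sketch (an illustration only; the tracker itself approximates distributions over such rotations):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def rotate(q, v):
    """Rotate 3-vector v by unit quaternion q via q * (0, v) * conj(q)."""
    conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, np.concatenate([[0.0], v])), conj)[1:]

# A 90-degree rotation about the z-axis (the body's vertical axis)
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
rotated = rotate(q, np.array([1.0, 0.0, 0.0]))   # approximately (0, 1, 0)
```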
Example frames showing the distribution of the samples using the SIR-PF (top) and the proposed method (bottom).
Example results - pose estimation errors measured in mm using 3 cameras
Method S1 S2 S3 Average
APF 194.2 75.0 87.7 118.9 +/- 65.5
SIR-PF 105.1 93.0 109.2 102.5 +/- 8.4
Proposed method 87.3 95.2 98.5 93.7 +/- 5.8
Example results - tracking errors measured in mm using 3 cameras
Tracking error in each frame for the APF (blue) and the proposed method (red). Tracking errors for walking using each method applied to a different frame rate.
Example results - pose estimation errors measured in mm using 2 cameras
Method S1 S2 S3 Average
APF 200.7 120.0 117.9 146.2 +/- 47.2
SIR-PF 105.1 105.2 120.7 110.4 +/- 8.9
Proposed method 89.3 108.7 113.5 103.8 +/- 12.8
Example results - the MAP 3-D pose using the proposed method
Example frames showing the MAP 3-D pose projected into each camera view.
Example results - pose estimation results
Comparison of pose estimation between the SIR-PF (top) and proposed method (bottom).
Scheduling a card tournament with 4 players
My friend Brad had a really interesting real-world optimization problem. He wants to have a card tournament where either 16, 28, or 40 players play. There are 4 players in each hand, and they play a number of rounds. The trick is to schedule the tournament so that each player plays all of the other players exactly once. Believe it or not, he actually had a party like this for 16 players, and he wanted to try 28 or 40. To work out evenly with n players, you need to have both n/4 and (n-1)/3 be integers. For example, it works for the following combinations of players and hands played:
(Players, hands played)
Given that I'm the "optimization guy", I'm supposed to be able to solve problems like this. I tried solving it with mixed integer programming, but it has a really lousy LP relaxation. It turns out constraint programming is a better approach. I eventually found a web page by Warwick Harvey at IC-Parc at Imperial College that gives the best known solutions for these problems. He calls it the "social golfer" problem, or "Kirkman's schoolgirl problem". (Imperial is a great school, by the way. My advisor, and lots of other cool folks, went to Imperial.)
I'm just posting about the problem, because it was really hard to find the page in Google, given that I wasn't searching on "Golf". If you're scheduling any tournament with more than 2 participants
in each round, you're in luck.
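The divisibility conditions in the post can be enumerated with a one-liner (these are necessary conditions only; they don't by themselves guarantee a schedule exists):

```python
# Player counts n where everyone could meet everyone exactly once in 4-player hands:
# n/4 tables per round and (n-1)/3 hands per player must both be whole numbers.
valid = [n for n in range(4, 101) if n % 4 == 0 and (n - 1) % 3 == 0]
print(valid)   # [4, 16, 28, 40, 52, 64, 76, 88, 100]
```

Equivalently, n must be congruent to 4 mod 12, which recovers the 16, 28, and 40 player counts from the post.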
callPeaksMultivariate: Fit a Hidden Markov Model to multiple ChIP-seq samples in chromstaR: Combinatorial and Differential Chromatin State Analysis for ChIP-Seq Data
Fit a HMM to multiple ChIP-seq samples to determine the combinatorial state of genomic regions. Input is a list of uniHMMs generated by callPeaksUnivariate.
callPeaksMultivariate(
  hmms,
  use.states,
  max.states = NULL,
  per.chrom = TRUE,
  chromosomes = NULL,
  eps = 0.01,
  keep.posteriors = FALSE,
  num.threads = 1,
  max.time = NULL,
  max.iter = NULL,
  keep.densities = FALSE,
  verbosity = 1,
  temp.savedir = NULL
)
hmms A list of uniHMMs generated by callPeaksUnivariate, e.g. list(hmm1,hmm2,...) or a vector of files that contain such objects, e.g. c("file1","file2",...).
use.states A data.frame with combinatorial states which are used in the multivariate HMM, generated by function stateBrewer. If both use.states and max.states are NULL, the maximum possible
number of combinatorial states will be used.
max.states Maximum number of combinatorial states to use in the multivariate HMM. The states are ordered by occurrence as determined from the combination of univariate state calls.
per.chrom If per.chrom=TRUE chromosomes will be treated separately. This tremendously speeds up the calculation but results might be noisier as compared to per.chrom=FALSE, where all
chromosomes are concatenated for the HMM.
chromosomes A vector specifying the chromosomes to use from the models in hmms. The default (NULL) uses all available chromosomes.
eps Convergence threshold for the Baum-Welch algorithm.
keep.posteriors If set to TRUE, posteriors will be available in the output. This can be useful to change the posterior cutoff later, but increases the necessary disk space to store the result immensely.
num.threads Number of threads to use. Setting this to >1 may give increased performance.
max.time The maximum running time in seconds for the Baum-Welch algorithm. If this time is reached, the Baum-Welch will terminate after the current iteration finishes. The default NULL is no limit.
max.iter The maximum number of iterations for the Baum-Welch algorithm. The default NULL is no limit.
keep.densities If set to TRUE (default=FALSE), densities will be available in the output. This should only be needed for debugging.
verbosity Verbosity level for the fitting procedure. 0 - No output, 1 - Iterations are printed.
temp.savedir A directory where to store intermediate results if per.chrom=TRUE.
Emission distributions from the univariate HMMs are used with a Gaussian copula to generate a multivariate emission distribution for each combinatorial state. This multivariate distribution is then
kept fixed and the transition probabilities are fitted with a Baum-Welch. Please refer to our manuscript at http://dx.doi.org/10.1101/038612 for a detailed description of the method.
# Get example BAM files for 2 different marks in hypertensive rat
file.path <- system.file("extdata","euratrans", package='chromstaRData')
files <- list.files(file.path, full.names=TRUE, pattern='SHR.*bam$')[c(1:2,6)]
# Construct experiment structure
exp <- data.frame(file=files, mark=c("H3K27me3","H3K27me3","H3K4me3"),
                  condition=rep("SHR",3), replicate=c(1:2,1), pairedEndReads=FALSE,
                  controlFiles=NA)
states <- stateBrewer(exp, mode='combinatorial')
# Bin the data
data(rn4_chrominfo)
binned.data <- list()
for (file in files) {
  binned.data[[basename(file)]] <- binReads(file, binsizes=1000, stepsizes=500,
                                            experiment.table=exp,
                                            assembly=rn4_chrominfo, chromosomes='chr12')
}
# Obtain the univariate fits
models <- list()
for (i1 in 1:length(binned.data)) {
  models[[i1]] <- callPeaksUnivariate(binned.data[[i1]], max.time=60, eps=1)
}
# Call multivariate peaks
multimodel <- callPeaksMultivariate(models, use.states=states, eps=1, max.time=60)
# Check some plots
heatmapTransitionProbs(multimodel)
heatmapCountCorrelation(multimodel)
Problem Book
Problem Book, 18th ed. (2007) Corrections Listed in Order of Problems
This is the page you need if you have the 18th edition, 2007. You need to make the changes listed below AND the additional changes needed to update to the 19th edition.
12/14/07 Problem 15-4 -- Answer. The answer to part C should be 1/4 X (0.42/0.91)^2. The decimal point before the 9 was accidentally omitted. 0.42/0.91 = the chance that a person with a normal phenotype will be a carrier. The chance that both (normal) parents will be carriers is (0.42/0.91)^2.
12/7/07 Problems 14-9 to 14-12. These are the problems that should be on page 122. We accidentally reprinted page 121 instead. Click here for the missing page.
8/1/08 Problem 10-19, part B -- Answer. The RF (in %) = 10.2, not 10.5
7/31/08 Problem 10R-5, part B -- Answer. There is a typo in line 4 of the answer to part B. It says aB instead of ab. Yellow X red will give AB, aB, Ab, and ab (if AaBb) or AB and aB (if AaBB).
10/11/07 Problem 4-13. Reaction X should be Glucose-P + ATP → 2 glyceraldehyde-3-P + ADP. The '2' is missing.
7/31/08 Problem 8R-3, part D -- Answer. The last paragraph of the answer should read 'The two chromosomes can line up two ways....' Delete the word 'homologous' -- this is a haploid organism and none
of the chromosomes are homologous.
8/1/08 Problem 7-14 -- Answer. You need 3 lys-tRNA per ribosome, not 2, because the E site will not empty until the A site is filled. The answers in the older editions do not take the E site into account.
10/9/07 Problem Set 6 -- Answers: The answers to problem set #6 are missing -- the questions are repeated where the answers should be. Click here for the key to problem set 6.
[curves] To multiply or to add in HKD schemes?
Pieter Wuille pieter.wuille at gmail.com
Thu Apr 27 10:45:26 PDT 2017
On Fri, Apr 7, 2017 at 3:08 PM, Oleg Andreev <oleganza at gmail.com> wrote:
> Hey there,
> HKD stands for Hierarchical Key Derivation, e.g. BIP32 [1] or ChainKD [2].
> Alternatively known as "blinded keys" per Tor's draft [3].
> All these schemes generate a scalar to be mixed with the parent public key
> P using an index or nonce i:
> h(i) := Hash(P || i || stuff)
> The first two schemes add a derivation factor (multiplied by the base
> point)
> to the parent pubkey, while the Tor's approach is to multiply the parent
> pubkey by the factor:
> Child(i) := P + h(i)*G // BIP32, ChainKD
> Child(i) := h(i)*P // Tor
> Last time I asked Pieter Wuille (BIP32's author) a couple years ago about
> their choice,
> his reply (if I recall correctly) was that scalar multiplication for a
> base point
> is more efficient than for an arbitrary point.
In an earlier draft of BIP32 at the time we were using multiplication as
well. At some point, the trick that allows recovering the parent key from
the parent public and child private key was brought up, and we realized
that neither multiplication nor addition could fix this. As addition was
faster and simpler, and didn't seem to have any security downsides over
multiplication, we switched BIP32 to use addition.
I think that was the right choice. I have had a few cases where it was not
obvious to people that multiplication with a scalar can be reversed
("Doesn't that need an EC division?"), while it is very clear that addition
is always reversable.
I wonder if there's a difference in functionality if we add the factor
> (a-la BIP32) or multiply (a-la Tor).
> Maybe some weird ZK schemes benefit from blinding/derivation via
> multiplication instead of addition?
If there are benefits to using the multiplication form, I'd very much like
to hear about them.
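The reversibility discussed above can be sketched with plain modular arithmetic on scalars (a toy modulus stands in for a real curve's group order, and `h` plays the role of Hash(P || i || stuff), which anyone holding the parent public key can compute):

```python
# Toy prime group order; real schemes use the ~256-bit order of secp256k1 or Ed25519.
N = 2**31 - 1

def child_add(parent_priv, h):      # BIP32 / ChainKD style: P + h*G on the curve
    return (parent_priv + h) % N

def child_mul(parent_priv, h):      # Tor style: h*P on the curve
    return (parent_priv * h) % N

parent, h = 123456789, 987654321

# Both derivations are reversible given the child private key and h:
recovered_add = (child_add(parent, h) - h) % N              # plain subtraction
recovered_mul = (child_mul(parent, h) * pow(h, -1, N)) % N  # modular inverse, no "EC division" needed
```

This is the point made above: multiplication is just as reversible as addition (via the inverse scalar), so neither form protects the parent key once a child private key and the derivation factor leak.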
Video - Topological aspects in the theory of aperiodic solids and tiling spaces
After a review of various types of tilings and aperiodic materials, the notion of tiling space (or Hull) will be defined. The action of the translation group makes it a dynamical system. Various
local properties, such as the notion of "Finite Local Complexity" or of "Repetitivity" will be interpreted in terms of the tiling space. The special case of quasicrystal will be described. In the
second part of the talk, various tools will be introduced to compute the topological invariants of the tiling space. First the tiling space will be seen as an inverse limit of compact branched
oriented manifolds (the Anderson-Putnam complex). Then various cohomologies will be defined on the tiling space giving rise to isomorphic cohomology groups. As applications, the "Gap Labeling
Theorem" will be described, and some results for the main quasicrystal and simple tilings will be given.
CMS Winter 2001 Meeting
I will describe some work in computational electromagnetics that arises within large scale inverse problems for geophysical prospecting. Traditional formulations and discretizations of
time-harmonic Maxwell's equations in three dimensions used in the forward-modelling leads to a large, sparse system of linear algebraic equations that is difficult to solve. That is, iterative
methods applied to the linear system are slow to converge, a major drawback in solving practical inverse problems.
Towards developing a multigrid preconditioner, I'll show a Fourier analysis based on a finite-volume discretization of a vector potential formulation of time-harmonic Maxwell's equations on a
staggered grid in three dimensions. Grid-independent bounds on the eigenvalue and singular value ranges of the system obtained using a preconditioner based on exact inversion of the dominant
diagonal blocks of the coefficient matrix can be proved. This result implies that a multigrid preconditioner that uses single multigrid cycles to effect inversion of the diagonal blocks also
yields a preconditioned system with l[2]-condition number bounded independent of the grid size. Numerical experiments show that the somewhat restrictive assumptions of the Fourier analysis do not
prohibit it from describing the essential local behavior of the preconditioned operator under consideration. A very efficient, practical solver is obtained.
This is joint work with U. Ascher.
The Vector Integration to Endpoint (VITE) circuit describes a real time neural network model which simulates behavioral and neurobiological properties of planned arm movements by the interaction
of two populations of neurons. We generalize this model to include a delay between the interacting populations and give conditions on how its presence affects the accuracy of planned movements.
We also show the existence of a nonzero critical value for the delay where a transition between accurate movement and target overshoot occurs. This critical value of the delay depends on the
movement speed and becomes arbitrarily large for sufficiently slow movements. Thus neurobiological or artificial systems modeled by the VITE circuit can tolerate an arbitrarily large delay if the
overall movement speed is sufficiently slow.
The one-dimensional steady-state heat and mass transfer of a two-phase zone in a water-saturated porous medium is studied. The system consists of a sand-water-vapour mixture in a tube that is
heated from above and cooled from below. Under certain conditions, a two-phase zone of both vapour and water exists in the middle of the tube. A model problem for the temperature and the
saturation profiles within this two-phase zone is formulated by allowing for an explicit temperature dependence for the saturation vapour pressure together with an explicit saturation dependence
for the capillary pressure. A boundary layer analysis is performed on this model in the asymptotic limit of a large vapour pressure gradient. This asymptotic limit is similar to the large
activation energy limit commonly used in combustion problems. In this limit, and away from any boundary layers, an uncoupled outer problem for the saturation and the temperature is obtained. From
this outer problem it is shown that the temperature profile is slowly varying and that the outer saturation profile agrees very well with that obtained in the previous model of Udell [J. Heat Transfer, 105(1983), p. 485], where strict isothermal conditions were assumed. The condensation and evaporation occurring within the boundary
layers near the edges of the two-phase zone is examined. Finally, an iterative method is described that allows both the full and outer models of the two-phase zone to be coupled to the two
single-phase zones consisting of either water or vapour. Numerical computations are performed with realistic control parameters for the entire system.
Most clustering algorithms do not work efficiently for data sets in high dimensional spaces. Due to the inherent sparsity of data points, it is not feasible to find interesting clusters in the
original full space of all dimensions, but pruning off dimensions in advance, as most feature selection procedures do, may lead to significant loss of information and thus render the clustering
results unreliable.
In a recent project with Jianhong Wu, we propose a new neural network architecture Projective Adaptive Resonance Theory (PART) in order to provide a solution to this feasibility-reliability
dilemma in clustering data sets in high dimensional spaces. The architecture is based on the well known ART developed by Carpenter and Grossberg, and a major modification (selective output
signaling) is provided in order to deal with the inherent sparsity in the full space of the data points from many data-mining applications. Unlike PROCLUS (proposed by Aggarwal et al. in 1999) and many other clustering algorithms, the PART algorithm does not require the number of clusters as an input parameter; in fact, it will find the number of clusters. Our simulations on
high dimensional synthetic data show that PART algorithm, with a wide range of input parameters, enables us to find the correct number of clusters, the correct centers of the clusters and the
sufficiently large subsets of dimensions where clusters are formed, so that we are able to fully reproduce the original input clusters after a reassignment procedure.
We will also show that the PART algorithm is based on rigorous mathematical analysis of the dynamics of the PART neural network model (a large-scale system of differential equations), and that in some
ideal situations which arise in many applications, PART does reproduce the original input cluster structures.
Many numerically intensive computations done in a scientific computing environment require large quantities of uniformly distributed pseudorandom numbers. Large-scale computations on parallel
processors pose additional demands, such as independent generation of pseudorandom numbers on each processor to avoid communication overhead, or coordination between the independent generators to
provide consistency during program development.
This talk shows how mathematical innovation, the fused multiply-add instruction, loop unrolling, and floating-point ``tricks'' can result in a uniprocessor speed improvement of over 50 times over
generic algorithms, while retaining bit-wise agreement with existing, proven random number generators. The result is a multiplicative congruential random number generator with modulus 2^k, k ≤ 52, and period 2^(k-2), that runs at a rate of 40 million uniformly distributed random numbers in the interval (0,1) per second on RS/6000 POWER2 Model 590 processors, or one number every 3 machine
cycles. In addition, the algorithms are ``embarrassingly parallel'', so that a 250-node IBM SP2 computer can generate 10 billion uniform random numbers per second.
This is joint work with Ramesh Agarwal, Fred Gustavson, Alok Kothari and Mohammad Zubair. Other pseudorandom number generators resulting from this work cover the interval (-1,1), have a full
period of 2^k, or have modulus 2^k-1. Our algorithms are used in the IBM XL Fortran and XLHPF (High Performance Fortran) RANDOM_NUMBER functions.
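A minimal sketch of the multiplicative congruential scheme described above, with the power-of-two modulus realized as a bit mask. The multiplier below is illustrative only; the IBM generators rely on specific, proven multipliers and floating-point techniques not reproduced here.

```javascript
// Multiplicative congruential generator: x_{n+1} = (a * x_n) mod 2^k,
// scaled into (0, 1). An odd seed and a multiplier a ≡ 3 or 5 (mod 8)
// give the maximal period 2^(k-2) for a power-of-two modulus.
const K = 52n;
const MOD = 1n << K;                 // modulus 2^52
const A = 69069n;                    // example multiplier, 69069 ≡ 5 (mod 8)

function makeMcg(seed) {
  let x = BigInt(seed) | 1n;         // force the seed odd
  return function next() {
    x = (A * x) & (MOD - 1n);        // multiply, then reduce mod 2^k
    return Number(x) / Number(MOD);  // uniform deviate in (0, 1)
  };
}
```

Because the state stays odd it can never reach zero, so the generator stays strictly inside (0, 1).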
To solve real world problems such as distribution of heat under the hood of an automobile, engineers often use the finite element method for solving partial differential equations. Starting with
a geometrical description of an underhood compartment, the numerical solution often relies on collections of triangles called meshes. This talk emphasizes the generation of good finite element meshes. An investigation into their quality provides opportunities to apply mathematics both in constructing meshes and in evaluating the quality of the constructed meshes.
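As one concrete example of the kind of quality measure such an investigation can use (this particular metric is a common choice, not one specified in the abstract): the normalized ratio q = 4√3·area / (sum of squared edge lengths), which equals 1 for an equilateral triangle and tends to 0 as the triangle degenerates.

```javascript
// Triangle quality q = 4*sqrt(3)*area / (l1^2 + l2^2 + l3^2).
// q = 1 for an equilateral triangle; q -> 0 as the triangle flattens.
function triangleQuality([ax, ay], [bx, by], [cx, cy]) {
  // Signed area via the cross product, made positive.
  const area = Math.abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2;
  const edgeSq =
    (bx - ax) ** 2 + (by - ay) ** 2 +
    (cx - bx) ** 2 + (cy - by) ** 2 +
    (ax - cx) ** 2 + (ay - cy) ** 2;
  return (4 * Math.sqrt(3) * area) / edgeSq;
}
```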
In cementing an oil well a series of non-Newtonian fluids are pumped through a narrow eccentric annulus, in an effort to displace the drilling mud and achieve zonal isolation of the well. These
flows are typically laminar and the fluids involved are shear-thinning and predominantly visco-plastic. A range of interesting displacement flows result. The aim of this talk is to give an
overview of the different problems that result and outline what efforts are being undertaken to resolve them.
In this talk we consider the motion of a two-dimensional liquid drop or bubble on a solid surface exposed to a shear flow. Peskin proposed an elegant and efficient method for simulating blood
flow in the heart, which can be generalized to solve other problems with moving interfaces. Unverdi and Tryggvason calculated the rising bubble problem by using a front-tracking method based on
Peskin's method. The main challenge of numerical modelling the motion of a liquid drop on a solid surface, in addition to capturing the moving interface between the liquid and gas phases, is to
incorporate conditions at the moving contact lines. In our study of the problem, the free surface between two liquid phases is handled by a front-tracking method; the moving contact line is
modelled by a slip velocity and contact angle hysteresis is included; and the local forces are introduced at the moving contact lines based on a relationship of slip coefficient, moving contact
angle and contact line speed. Several numerical examples are also given. This is a joint work with Huaxiong Huang and Brian Wetton.
We consider models of water management in PEM fuel cells, which involve phase change and two-phase flow in porous media. The Teflonation of fuel cell electrodes creates a non-wetting porous
media, and renders low water saturations immobile. The dynamics of wetting fronts in 1 and 2D are further complicated by condensation and evaporation layers. We discuss the proper formulation of
the problem, asymptotics of the steady states, and some of the numerical difficulties in 1D and 2D resolution of the front dynamics.
In cementing an oil well it is necessary to displace mud from the annular space between the casing and the outer rock formation. In an extreme case, layers of drilling mud are left immobile,
stuck to the walls of the annular space. Recent studies (Allouche, Frigaard and Sona, 2000) have shown that these layers are non-uniform, exhibiting small amplitude long-wavelength fluctuations
in the direction of flow. Cemented annuli are generally long and thin. Assuming little azimuthal flow, a section along the annulus is approximated by a channel with wavy walls. This provides the
motivation for our study.
We consider a small-amplitude, long-wavelength perturbation from a plane channel and its effects on the Poiseuille flow of a Bingham fluid. A naive application of lubrication-like scalings results in a leading
order velocity profile with a plug velocity at the channel centre, which varies with length along the annulus. Since the rate of strain should be zero in the plug, this leads to a contradiction
and it is clear that the analysis has broken down. This problem is resolved using a more refined analysis. It is shown that for a small perturbation, a truly unyielded plug remains at the centre
of the channel. This true plug region is connected to the sheared outer layer via a transition layer. We determine expressions for the thickness of this layer.
Finally, we develop a two-dimensional numerical solution of the flow using the augmented Lagrangian method. This method has the advantage of accurately representing unyielded regions of the flow.
A three-phase ensemble-averaged model will be discussed for the flow of water and air through a deformable porous matrix. The model predicts a separation of the flow into saturated and
unsaturated regions. A closure of the model is proposed based on an experimentally-motivated heuristic elastic law which allows large-strain nonlinear behavior to be treated in a relatively
straightforward way. The equations are applied to flow in the ``nip'' area of a roll press machine whose function is to squeeze water out of wet paper as part of the manufacturing process. By
exploiting the thin-layer limit suggested by the geometry of the nip, the problem is reduced to a nonlinear convection-diffusion equation with one free boundary. A numerical method is proposed
for determining the flow and sample simulations are presented.
Condensation is a complex phenomenon, involving phase change and transport of mass, momentum and energy. We first develop a mathematical model for the flow of a multicomponent, dry gas in a
porous medium, consisting of a coupled system of nonlinear partial differential equations. This model is then extended to include condensation and liquid water by the introduction of a simple
regularised condensation term, and by adding one more equation governing the liquid transport. Migration of water through the porous medium occurs on a much longer time scale than either
condensation or the gas motion, which makes the underlying problem extremely stiff. We will discuss the impact this has on the development of efficient numerical methods for solving the system of equations.
Our interest in this problem arises from the study of condensation and gas transport in hydrogen fuel cells. However, very similar models arise in other applications as diverse as kiln drying of
wood, transport of contaminants in groundwater, and thermoregulation of honeybee clusters.
Sag bending is not a chronic geriatric condition but is a method used in the manufacture of windshields. A sheet of glass rests on a shaped frame and is heated from below. The glass sags under
the action of gravity. The aim is to place the heaters in a manner that will cause the glass to sag to a specified target shape. The problem which is being considered is that of an elastic plate
with variable elastic constants under the action of gravity. The heating controls the elastic constants in a known manner so that the problem becomes a control problem with the Young's modulus as
the control and a measure of the difference between the actual shape and the target shape as the objective function. There are also upper and lower bound constraints on the value of the Young's
modulus. The company has a code but it apparently does not perform well. There are many aspects of this problem both in the mathematical formulation and in the numerical schemes proposed for its
solution some of which will be discussed in the talk.
We discuss the behaviour of the dynamical angle at three phase (liquid-solid-gas) contact points for the two-dimensional steady state motion of a liquid drop. We present a result giving this
angle as a function of the droplet speed in the form of a simple algebraic expression. It is well known that near these contact points, there are singular stresses unless the problem is
regularized. The originality of the work is that it deals directly with the singularity, using only an ansatz on the interpretation of the singular integrals. This problem came to our attention
from our work modelling ``water management'' in fuel cells. Some general remarks on fuel cell design and simpler related models will be given. This is joint work with Arian Novruzi and Huaxiong Huang.
rasterFunctionConstants | API Reference | ArcGIS Maps SDK for JavaScript 4.31 | Esri Developer
AMD: require(["esri/layers/support/rasterFunctionConstants"], (rasterFunctionConstants) => { /* code goes here */ });
ESM: import * as rasterFunctionConstants from "@arcgis/core/layers/support/rasterFunctionConstants.js";
Object: esri/layers/support/rasterFunctionConstants
Since: ArcGIS Maps SDK for JavaScript 4.26
Property Overview
Each property is an Object on rasterFunctionConstants:
- bandIndexType: Method name constants used by the band index raster functions.
- cellStatisticalOperation: The local cell statistics operations type constants.
- colorRampName: Predefined raster color ramp name constants used for the Colormap raster function.
- colormapName: Predefined raster color map name constants used for the Colormap raster function.
- convolutionKernel: Kernel type constants used for the Convolution raster function.
- curvatureType: Curvature type constants used for the curvature raster function.
- hillshadeType: Hillshade type constants used for the hillshade raster function.
- localArithmeticOperation: The local arithmetic operations types.
- The local conditional operations type constants.
- The local logical operations type constants.
- The local trigonometric operations type constants.
- The missing band action constants available for the Extract band raster function.
- The NoData interpretation constants used for the Mask raster function.
- Slope type constants used for the slope raster function.
- Stretch type constants used for the stretch raster function.
Property Details
bandIndexType Object readonly
User defined method. When using the user defined method to define your band arithmetic algorithm, you can enter a single-line algebraic formula to create a single-band output. The supported
operators are -,+,/,*, and unary -. To identify the bands, add B or b to the beginning of the band number.
NDVI Number
The Normalized Difference Vegetation Index (NDVI) method is a standardized index allowing you to generate an image displaying greenness (relative biomass). This index takes advantage of the contrast of the characteristics of two bands from a multispectral raster dataset—the chlorophyll pigment absorptions in the red band and the high reflectivity of plant materials in the NIR band.
NDVIRe Number
The Red-Edge NDVI (NDVIre) method is a vegetation index for estimating vegetation health using the red-edge band. It is especially useful for estimating crop health in the mid to late stages
of growth, when the chlorophyll concentration is relatively higher. Also, NDVIre can be used to map the within-field variability of nitrogen foliage to understand the fertilizer requirements
of crops.
BAI Number
The Burn Area Index (BAI) uses the reflectance values in the red and NIR portion of the spectrum to identify the areas of the terrain affected by fire. See BAI raster function.
NBR Number
The Normalized Burn Ratio Index (NBRI) uses the NIR and SWIR bands to emphasize burned areas, while mitigating illumination and atmospheric effects. Your images should be corrected to
reflectance values before using this index. See NBR raster function.
NDBI Number
The Normalized Difference Built-up Index (NDBI) uses the NIR and SWIR bands to emphasize manufactured built-up areas. It is ratio based to mitigate the effects of terrain illumination
differences as well as atmospheric effects. See NDBI raster function.
NDMI Number
The Normalized Difference Moisture Index (NDMI) is sensitive to the moisture levels in vegetation. It is used to monitor droughts and fuel levels in fire-prone areas. It uses NIR and SWIR
bands to create a ratio designed to mitigate illumination and atmospheric effects. See NDMI raster function.
NDSI Number
The Normalized Difference Snow Index (NDSI) is designed to use MODIS (band 4 and band 6) and Landsat TM (band 2 and band 5) for identification of snow cover while ignoring cloud cover. Since
it is ratio based, it also mitigates atmospheric effects. See NDSI raster function.
GEMI Number
The Global Environmental Monitoring Index (GEMI) method is a nonlinear vegetation index for global environmental monitoring from satellite imagery. It's similar to NDVI, but it's less
sensitive to atmospheric effects. It is affected by bare soil; therefore, it's not recommended for use in areas of sparse or moderately dense vegetation.
GVITM Number
The Green Vegetation Index (GVI) method was originally designed from Landsat MSS imagery and has been modified for Landsat TM imagery. It's also known as the Landsat TM Tasseled Cap green
vegetation index. It can be used with imagery whose bands share the same spectral characteristics.
PVI String
The Perpendicular Vegetation Index (PVI) method is similar to a difference vegetation index; however, it is sensitive to atmospheric variations. When using this method to compare images, it
should only be used on images that have been atmospherically corrected.
Sultan Number
The Sultan's process takes a six-band 8-bit image and uses the Sultan's Formula method to produce a three-band 8-bit image. The resulting image highlights rock formations called ophiolites on
coastlines. This formula was designed based on the TM or ETM bands of a Landsat 5 or 7 scene.
VARI Number
The Visible Atmospherically Resistant Index (VARI) method is a vegetation index for estimating vegetation fraction quantitatively with only the visible range of the spectrum.
GNDVI Number
The Green Normalized Difference Vegetation Index (GNDVI) method is a vegetation index for estimating photosynthetic activity and is commonly used to determine water and nitrogen uptake into the plant canopy.
SAVI Number
The Soil-Adjusted Vegetation Index (SAVI) method is a vegetation index that attempts to minimize soil brightness influences using a soil-brightness correction factor. This is often used in
arid regions where vegetative cover is low, and it outputs values between -1.0 and 1.0.
TSAVI Number
The Transformed Soil Adjusted Vegetation Index (TSAVI) method is a vegetation index that minimizes soil brightness influences by assuming the soil line has an arbitrary slope and intercept.
MSAVI Number
The Modified Soil Adjusted Vegetation Index (MSAVI) method minimizes the effect of bare soil on the SAVI.
SR Number
The Simple Ratio (SR) method is a common vegetation index for estimating the amount of vegetation. It is the ratio of light scattered in the NIR and absorbed in red bands, which reduces the
effects of atmosphere and topography.
SRRe Number
The Red-Edge Simple Ratio (SRre) method is a vegetation index for estimating the amount of healthy and stressed vegetation. It is the ratio of light scattered in the NIR and red-edge bands,
which reduces the effects of atmosphere and topography.
MTVI2 Number
The Modified Triangular Vegetation Index (MTVI2) method is a vegetation index for detecting leaf chlorophyll content at the canopy scale while being relatively insensitive to leaf area index.
It uses reflectance in the green, red, and NIR bands.
RTVICore Number
The Red-Edge Triangulated Vegetation Index (RTVICore) method is a vegetation index for estimating leaf area index and biomass. This index uses reflectance in the NIR, red-edge, and green
spectral bands.
CIRe Number
The Chlorophyll Index - Red-Edge (CIre) method is a vegetation index for estimating the chlorophyll content in leaves using the ratio of reflectivity in the NIR and red-edge bands.
CIG Number
Chlorophyll index - Green (CIG) method is a vegetation index for estimating the chlorophyll content in leaves using the ratio of reflectivity in the NIR and green bands.
EVI Number
The Enhanced Vegetation Index (EVI) method is an optimized vegetation index that accounts for atmospheric influences and vegetation background signal. It's similar to NDVI but is less
sensitive to background and atmospheric noise, and it does not become as saturated as NDVI when viewing areas with very dense green vegetation. EVI raster function.
ironOxide Number
The Iron Oxide (ironOxide) ratio method is a geological index for identifying rock features that have experienced oxidation of iron-bearing sulfides using the red and blue bands. It is useful
in identifying iron oxide features below vegetation canopies and is used in mineral composite mapping. ironOxide raster function.
ferrousMinerals Number
The Ferrous Minerals (ferrousMinerals) ratio method is a geological index for identifying rock features containing some quantity of iron-bearing minerals using the SWIR and NIR bands. It is
used in mineral composite mapping. ferrousMinerals raster function.
clayMinerals Number
The Clay Minerals (clayMinerals) ratio method is a geological index for identifying mineral features containing clay and alunite using two shortwave infrared (SWIR) bands. It is used in
mineral composite mapping. See clayMinerals raster function.
NDWI Number
The Normalized Difference Water Index (NDWI) method is an index for delineating and monitoring content changes in surface water. It is computed with the NIR and green bands. See NDWI raster function.
WNDWI Number
The Weighted Normalized Difference Water Index (WNDWI) method is a water index developed to reduce errors typically encountered in other water indices, including water turbidity, small water
bodies, or shadow in remote sensing scenes.
MNDWI Number
The Modified Normalized Difference Water Index (MNDWI) uses green and SWIR bands for the enhancement of open water features. It also diminishes built-up area features that are often
correlated with open water in other indices.
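Several of the indices above have widely published closed forms. As a hedged illustration (the formulas and coefficients below are the commonly cited ones, not values taken from this SDK, and the choice of bands depends on the sensor):

```javascript
// NDVI: normalized difference of NIR and red reflectance, in [-1, 1].
function ndvi(nir, red) {
  const sum = nir + red;
  return sum === 0 ? 0 : (nir - red) / sum; // guard against a zero denominator
}

// SAVI: NDVI with a soil-brightness correction factor L (commonly 0.5);
// with L = 0 it reduces to NDVI.
function savi(nir, red, L = 0.5) {
  return ((nir - red) / (nir + red + L)) * (1 + L);
}

// EVI with the standard published coefficients G=2.5, C1=6, C2=7.5, L=1.
function evi(nir, red, blue) {
  return (2.5 * (nir - red)) / (nir + 6 * red - 7.5 * blue + 1);
}
```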
cellStatisticalOperation Object readonly
The local cell statistics operations type constants. This function calculates a statistic on a pixel-by-pixel basis. Refer to the Local raster functions for more info.
majority Number
Determines the majority (value that occurs most often) of the inputs.
max Number
Determines the maximum (largest value) of the inputs.
mean Number
Determines the mean (average value) of the inputs.
med Number
Calculates the median of the inputs.
min Number
Determines the minimum (smallest value) of the inputs.
minority Number
Determines the minority (value that occurs least often) of the inputs.
range Number
Calculates the range (difference between largest and smallest value) of the inputs.
stddev Number
Calculates the standard deviation of the inputs.
sum Number
Calculates the sum (total of all values) of the inputs.
variety Number
Calculates the variety (number of unique values) of the inputs.
majorityIgnoreNoData Number
Determines the majority (value that occurs most often) of the inputs. Only cells that have data values will be used in determining the statistic value.
maxIgnoreNoData Number
Determines the maximum (largest value) of the inputs. Only cells that have data values will be used in determining the statistic value.
meanIgnoreNoData Number
Determines the mean (average value) of the inputs. Only cells that have data values will be used in determining the statistic value.
medIgnoreNoData Number
Determines the median of the inputs. Only cells that have data values will be used in determining the statistic value.
minIgnoreNoData Number
Determines the minimum (smallest value) of the inputs. Only cells that have data values will be used in determining the statistic value.
minorityIgnoreNoData Number
Determines the minority (value that occurs least often) of the inputs. Only cells that have data values will be used in determining the statistic value.
rangeIgnoreNoData Number
Calculates the range (difference between largest and smallest value) of the inputs. Only cells that have data values will be used in determining the statistic value.
stddevIgnoreNoData Number
Calculates the standard deviation of the inputs. Only cells that have data values will be used in determining the statistic value.
sumIgnoreNoData Number
Calculates the sum (total of all values) of the inputs. Only cells that have data values will be used in determining the statistic value.
varietyIgnoreNoData Number
Calculates the variety (number of unique values) of the inputs. Only cells that have data values will be used in determining the statistic value.
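What these local (cell-by-cell) statistics compute can be sketched generically. This is an illustration of the semantics only, not SDK code; the NODATA marker is a placeholder for whatever sentinel a real raster uses.

```javascript
const NODATA = null; // placeholder NoData marker for this sketch

// Apply a reducer to each pixel position across a stack of rasters
// (each raster a flat array of equal length). When ignoreNoData is true,
// NoData cells are dropped first, mirroring the *IgnoreNoData constants.
function localStat(rasters, reduce, ignoreNoData) {
  const n = rasters[0].length;
  const out = new Array(n);
  for (let i = 0; i < n; i++) {
    let vals = rasters.map(r => r[i]);
    if (ignoreNoData) vals = vals.filter(v => v !== NODATA);
    out[i] = vals.length === 0 || vals.includes(NODATA) ? NODATA : reduce(vals);
  }
  return out;
}

const mean = vs => vs.reduce((a, b) => a + b, 0) / vs.length;
const range = vs => Math.max(...vs) - Math.min(...vs);
```

For example, with two input rasters, a pixel that is NoData in either input stays NoData under the plain statistic but takes the remaining value under the IgnoreNoData variant.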
colorRampName Object readonly
Predefined raster color ramp name constants used for the Colormap raster function.
aspect String
Aspect.
blackToWhite String
Black to White.
blueBright String
Blue Bright.
blueLightToDark String
Blue Light to Dark.
blueGreenBright String
Blue-Green Bright.
blueGreenLightToDark String
Blue-Green Light to Dark.
brownLightToDark String
Brown Light to Dark.
brownToBlueGreenDivergingBright String
Brown to Blue Green Diverging, Bright.
brownToBlueGreenDivergingDark String
Brown to Blue Green Diverging, Dark.
coefficientBias String
Coefficient Bias.
coldToHotDiverging String
Cold to Hot Diverging.
conditionNumber String
Condition Number.
cyanToPurple String
Cyan to Purple.
cyanLightToBlueDark String
Cyan-Light to Blue-Dark.
distance String
Distance.
elevation1 String
Elevation #1.
elevation2 String
Elevation #2.
errors String
Errors.
grayLightToDark String
Gray Light to Dark.
greenBright String
Green Bright.
greenLightToDark String
Green Light to Dark.
greenToBlue String
Green to Blue.
orangeBright String
Orange Bright.
orangeLightToDark String
Orange Light to Dark.
partialSpectrum String
Partial Spectrum.
partialSpectrum1Diverging String
Partial Spectrum 1 Diverging.
partialSpectrum2Diverging String
Partial Spectrum 2 Diverging.
pinkToYellowGreenDivergingBright String
Pink to YellowGreen Diverging, Bright.
pinkToYellowGreenDivergingDark String
Pink to YellowGreen Diverging, Dark.
precipitation String
Precipitation.
prediction String
Prediction.
purpleBright String
Purple Bright.
purpleToGreenDivergingBright String
Purple to Green Diverging, Bright.
purpleToGreenDivergingDark String
Purple to Green Diverging, Dark.
purpleBlueBright String
Purple-Blue Bright.
purpleBlueLightToDark String
Purple-Blue Light to Dark.
purpleRedBright String
Purple-Red Bright.
purpleRedLightToDark String
Purple-Red Light to Dark.
redBright String
Red Bright.
redLightToDark String
Red Light to Dark.
redToBlueDivergingBright String
Red to Blue Diverging, Bright.
redToBlueDivergingDark String
Red to Blue Diverging, Dark.
redToGreen String
Red to Green.
redToGreenDivergingBright String
Red to Green Diverging, Bright.
redToGreenDivergingDark String
Red to Green Diverging, Dark.
slope String
Slope.
spectrumFullBright String
Spectrum-Full Bright.
spectrumFullDark String
Spectrum-Full Dark.
spectrumFullLight String
Spectrum-Full Light.
surface String
Surface.
temperature String
Temperature.
whiteToBlack String
White to Black.
yellowToDarkRed String
Yellow to Dark Red.
yellowToGreenToDarkBlue String
Yellow to Green to Dark Blue.
yellowToRed String
Yellow to Red.
yellowGreenBright String
Yellow-Green Bright.
yellowGreenLightToDark String
Yellow-Green Light to Dark.
colormapName Object readonly
Predefined raster color map name constants used for the Colormap raster function.
random String
A random colormap.
NDVI String
A colormap to visualize vegetation. Values near zero are blue. Low values are brown. Then the colors gradually change from red to orange to yellow to green, and to black as the vegetation index goes from low to high.
NDVI2 String
A colormap to visualize vegetation. Low values range from white to green. Then the colors range from gray to purple to violet to dark blue, and to black as the vegetation index goes from low to high.
NDVI3 String
A colormap to visualize vegetation. Values near zero are blue. Then the colors gradually change from red to orange, and to green as the vegetation index goes from low to high.
elevation String
A color map that gradually changes from cyan to purple to black.
gray String
A color map that gradually changes from black to white.
hillshade String
A colormap to visualize a hillshade product. It has a color scheme that gradually changes from black to white depending on topography.
convolutionKernel Object readonly
Kernel type constants used for the Convolution raster function. Gradient filters can be used for edge detection in 45 degree increments. Laplacian filters are often used for edge detection. They are often applied to an image that has first been smoothed to reduce its sensitivity to noise. Line detection filters, like the gradient filters, can be used to perform edge detection. The Sobel filter is used for edge detection.
User defined kernel type.
lineDetectionHorizontal Number
Horizontal line detection. Line detection filters, like the gradient filters, can be used to perform edge detection.
lineDetectionVertical Number
Vertical line detection. Line detection filters, like the gradient filters, can be used to perform edge detection.
lineDetectionLeftDiagonal Number
Left diagonal line detection. Line detection filters, like the gradient filters, can be used to perform edge detection.
lineDetectionRightDiagonal Number
Right diagonal line detection. Line detection filters, like the gradient filters, can be used to perform edge detection.
gradientNorth Number
North gradient detection. Gradient filters can be used for edge detection in 45 degree increments.
gradientWest Number
West gradient detection. Gradient filters can be used for edge detection in 45 degree increments.
gradientEast Number
East gradient detection. Gradient filters can be used for edge detection in 45 degree increments.
gradientSouth Number
South gradient detection. Gradient filters can be used for edge detection in 45 degree increments.
gradientNorthEast Number
North east gradient detection. Gradient filters can be used for edge detection in 45 degree increments.
gradientNorthWest Number
North west gradient detection. Gradient filters can be used for edge detection in 45 degree increments.
Smooths the data by reducing local variation and removing noise. Calculates the average (mean) value for each neighborhood. The effect is that the high and low values within each neighborhood are averaged out, reducing the extreme values in the data.
smoothing3x3 Number
Smooths (low-pass) the data by reducing local variation and removing noise. Calculates the average (mean) value for each neighborhood. The effect is that the high and low values within each neighborhood are averaged out, reducing the extreme values in the data.
smoothing5x5 Number
Smooths (low-pass) the data by reducing local variation and removing noise. Calculates the average (mean) value for each neighborhood. The effect is that the high and low values within each neighborhood are averaged out, reducing the extreme values in the data.
sharpen Number
Sharpens the date by calculating the focal sum statistic for each cell of the input using a weighted kernel neighborhood. It brings out the boundaries between features (for example. where a
water body meets the forest). thus sharpening edges between objects.
sharpen2 Number
Sharpens the date by calculating the focal sum statistic for each cell of the input using a weighted kernel neighborhood. It brings out the boundaries between features (for example. where a
water body meets the forest). thus sharpening edges between objects.
sharpening3x3 Number
Sharpens the data by calculating the focal sum statistic for each cell of the input using a weighted kernel neighborhood. It brings out the boundaries between features (for example, where a
water body meets the forest), thus sharpening edges between objects.
sharpening5x5 Number
Sharpens the data by calculating the focal sum statistic for each cell of the input using a weighted kernel neighborhood. It brings out the boundaries between features (for example, where a
water body meets the forest), thus sharpening edges between objects.
laplacian3x3 Number
Laplacian filters are often used for edge detection. They are often applied to an image that has first been smoothed to reduce its sensitivity to noise.
laplacian5x5 Number
Laplacian filters are often used for edge detection. They are often applied to an image that has first been smoothed to reduce its sensitivity to noise.
sobelHorizontal Number
The horizontal Sobel filter is used for edge detection.
sobelVertical Number
The vertical Sobel filter is used for edge detection.
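As an aside from the SDK, the response of a Sobel filter to a step edge can be seen with `scipy.ndimage` (assuming SciPy is available; this is an illustration, not the SDK's API):

```python
import numpy as np
from scipy import ndimage

# A vertical step edge: left columns are 0, right columns are 1
img = np.zeros((5, 5))
img[:, 3:] = 1.0

# Sobel along axis=1 measures the horizontal gradient, so it highlights
# vertical edges; axis=0 would highlight horizontal edges instead
edges = ndimage.sobel(img, axis=1)
```

The output is zero over the flat regions and nonzero only at the columns where the step occurs.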
pointSpread Number
The point spread function portrays the distribution of light from a point source through a lens. This will introduce a slight blurring effect.
none Number
No kernel type is specified.
curvatureType Objectreadonly
Since: ArcGIS Maps SDK for JavaScript 4.31 rasterFunctionConstants since 4.26, curvatureType added at 4.31.
standard Number
Combines the profile and planform curvatures.
planform Number
Is perpendicular to the direction of the maximum slope. It affects the convergence and divergence of flow across a surface.
profile Number
Is parallel to the slope and indicates the direction of maximum slope. It affects the acceleration and deceleration of flow across the surface.
hillshadeType Objectreadonly
Since: ArcGIS Maps SDK for JavaScript 4.31 rasterFunctionConstants since 4.26, hillshadeType added at 4.31.
traditional Number
Calculates hillshade from a single illumination direction. You can set the Azimuth and Altitude options to control the location of the light source.
multidirectional Number
Combines light from multiple sources to represent an enhanced visualization of the terrain.
Property localArithmeticOperation Object
plus Number
Adds (sums) the values of two rasters on a cell-by-cell basis.
minus Number
Subtracts the value of the second input raster from the value of the first input raster on a cell-by-cell basis.
times Number
Multiplies the values of two rasters on a cell-by-cell basis.
sqrt Number
Calculates the square root of the cell values in a raster.
power Number
Raises the cell values in a raster to the power of the values found in another raster.
abs Number
Calculates the absolute value of the cells in a raster.
divide Number
Divides the values of two rasters on a cell-by-cell basis.
exp Number
Calculates the base e exponential of the cells in a raster.
exp10 Number
Calculates the base 10 exponential of the cells in a raster.
exp2 Number
Calculates the base 2 exponential of the cells in a raster.
int Number
Converts each cell value of a raster to an integer by truncation.
float Number
Converts each cell value of a raster into a floating-point representation.
ln Number
Calculates the natural logarithm (base e) of cells in a raster.
log10 Number
Calculates the base 10 logarithm of cells in a raster.
log2 Number
Calculates the base 2 logarithm of cells in a raster.
mod Number
Finds the remainder (modulo) of the first raster when divided by the second raster on a cell-by-cell basis.
negate Number
Changes the sign (multiplies by -1) of the cell values of the input raster on a cell-by-cell basis.
roundDown Number
Returns the next lower integer value, just represented as a floating point, for each cell in a raster.
roundUp Number
Returns the next higher integer value, just represented as a floating point, for each cell in a raster.
square Number
Calculates the square of the cell values in a raster.
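Conceptually, these local (cell-by-cell) operations are elementwise array math; a NumPy sketch of a few of them (illustrative only, not the SDK's API):

```python
import numpy as np

a = np.array([[1.0, 4.0], [9.0, 16.0]])
b = np.array([[2.0, 2.0], [3.0, 3.0]])

plus_ab = a + b      # plus: cell-by-cell sum
times_ab = a * b     # times: cell-by-cell product
sqrt_a = np.sqrt(a)  # sqrt: square root of each cell
mod_ab = np.mod(a, b)  # mod: remainder of a divided by b, per cell
neg_a = -a           # negate: multiply each cell by -1
```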
Property localConditionalOperation Objectreadonly
setNull Number
Set Null sets identified cell locations to NoData based on a specified criteria. It returns NoData if a conditional evaluation is true, and returns the value specified by another raster if it
is false.
conditional Number
Performs a conditional If, Then, Else operation. When a Con operator is used, there usually needs to be two or more local functions chained together, where one local function states the
criteria and the second local function is the Con operator which uses the criteria and dictates what the true and false outputs should be.
Property localLogicalOperation Objectreadonly
bitwiseAnd Number
Performs a Bitwise And operation on the binary values of two input rasters.
bitwiseLeftShift Number
Performs a Bitwise Left Shift operation on the binary values of two input rasters.
bitwiseNot Number
Performs a Bitwise Not (complement) operation on the binary value of an input raster.
bitwiseOr Number
Performs a Bitwise Or operation on the binary values of two input rasters.
bitwiseRightShift Number
Performs a Bitwise Right Shift operation on the binary values of two input rasters.
bitwiseXOr Number
Performs a Bitwise eXclusive Or operation on the binary values of two input rasters.
booleanAnd Number
Performs a Boolean And operation on the cell values of two input rasters.
booleanNot Number
Performs a Boolean Not (complement) operation on the cell values of the input raster.
booleanOr Number
Performs a Boolean Or operation on the cell values of two input rasters.
booleanXOr Number
Performs a Boolean eXclusive Or operation on the cell values of two input rasters.
equalTo Number
Performs a Relational equal-to operation on two inputs on a cell-by-cell basis.
greaterThan Number
Performs a Relational greater-than operation on two inputs on a cell-by-cell basis.
greaterThanEqual Number
Performs a Relational greater-than-or-equal-to operation on two inputs on a cell-by-cell basis.
lessThan Number
Performs a Relational less-than operation on two inputs on a cell-by-cell basis.
lessThanEqual Number
Performs a Relational less-than-or-equal-to operation on two inputs on a cell-by-cell basis.
isNull Number
Determines which values from the input raster are NoData on a cell-by-cell basis.
notEqual Number
Performs a Relational not-equal-to operation on two inputs on a cell-by-cell basis.
Property localTrigonometricOperation Objectreadonly
acos Number
Calculates the inverse cosine of cells in a raster.
asin Number
Calculates the inverse sine of cells in a raster.
atan Number
Calculates the inverse tangent of cells in a raster.
atanh Number
Calculates the inverse hyperbolic tangent of cells in a raster.
cos Number
Calculates the cosine of cells in a raster.
cosh Number
Calculates the hyperbolic cosine of cells in a raster.
sin Number
Calculates the sine of cells in a raster.
sinh Number
Calculates the hyperbolic sine of cells in a raster.
tan Number
Calculates the tangent of cells in a raster.
tanh Number
Calculates the hyperbolic tangent of cells in a raster.
acosh Number
Calculates the inverse hyperbolic cosine of cells in a raster.
asinh Number
Calculates the inverse hyperbolic sine of cells in a raster.
atan2 Number
Calculates the inverse tangent (based on x,y) of cells in a raster.
Property missingBandAction Objectreadonly
bestMatch Number
Finds the best available band to use in place of the missing band based on wavelength, so that the function will not fail.
fail Number
If the input dataset is missing any band specified in the Band parameter, the function will fail.
Property noDataInterpretation Objectreadonly
The NoData interpretation constants used for the Mask raster function. This refers to how NoData Values will impact the output image.
matchAny Number
If the NoData value you specify occurs for a cell in a specified band, that cell in the output image will be NoData.
matchAll Number
The NoData values you specify for each band must occur in the same pixel for the output image to contain the NoData pixel.
slopeType Objectreadonly
degree Number
The inclination of slope is calculated in degrees. The values range from 0 to 90.
percentRise Number
The inclination of slope is calculated as percentage values. The values range from 0 to infinity. A flat surface is 0 percent rise, whereas a 45-degree surface is 100 percent rise. As the
surface becomes more vertical, the percent rise becomes increasingly larger.
adjusted Number
The inclination of slope is calculated the same as DEGREE, but the z-factor is adjusted for scale. It uses the Pixel Size Power (PSP) and Pixel Size Factor (PSF) values, which account for the
resolution changes (scale) as the viewer zooms in and out.
stretchType Objectreadonly
none Number
If the stretch type is None, no stretch method will be applied, even if statistics exist.
standardDeviation Number
The standard deviation stretch type applies a linear stretch between the values defined by the standard deviation (n) value.
histogramEqualization Number
The histogram equalization stretch type.
minMax Number
The minMax stretch type applies a linear stretch based on the output minimum and output maximum pixel values, which are used as the endpoints for the histogram.
percentClip Number
The percent clip stretch type applies a linear stretch between the defined percent clip minimum and percent clip maximum pixel values.
sigmoid Number
The Sigmoid contrast stretch is designed to highlight moderate pixel values in your imagery while maintaining sufficient contrast at the extremes.
What is Hierarchical Clustering in Python?
In the vast landscape of data exploration, where datasets sprawl like forests, hierarchical clustering acts as a guiding light, leading us through the dense thicket of information. Imagine a
dendrogram, a visual representation of data relationships, branching out like a tree, revealing clusters and connections within the data. This is where machine learning meets the art of clustering,
where Python serves as the wizard’s wand, casting spells of insight into the heart of datasets.
In this journey through the Python kingdom, we will unravel the mysteries of hierarchical clustering, exploring its intricacies and applications in data science. From dendrograms to distance
matrices, from agglomerative to divisive clustering, we will delve deep into the techniques and methods that make hierarchical clustering a cornerstone of data analysis.
Join us as we embark on this adventure, where data points become nodes in a vast knowledge network, and clusters emerge like constellations in the night sky, guiding us toward the insights hidden
within the data. Welcome to the world of hierarchical clustering in Python, where every cluster tells a story, and every dendrogram holds the key to unlocking the secrets of data science.
Study Material
• There are multiple ways to perform clustering. I encourage you to check out our awesome guide to the different types of clustering: An Introduction to Clustering and Different Methods of
• To learn more about clustering and other machine learning algorithms (both supervised and unsupervised), check out the following comprehensive program- Certified AI & ML Blackbelt+ Program
What is Hierarchical Clustering?
Hierarchical clustering is an unsupervised learning technique for grouping similar objects into clusters. It creates a hierarchy of clusters by merging or splitting them based on similarity measures.
It uses a bottom-up or top-down approach to construct a hierarchical data clustering schema.
Hierarchical clustering groups similar objects into a dendrogram. It merges similar clusters iteratively, starting with each data point as a separate cluster. This creates a tree-like structure that
shows the relationships between clusters and their hierarchy.
The dendrogram from hierarchical clustering reveals the hierarchy of clusters at different levels, highlighting natural groupings in the data. It provides a visual representation of the relationships
between clusters, helping to identify patterns and outliers, making it a valuable tool for exploratory data analysis. For example, let’s say we have the below points, and we want to cluster them into groups:
We can assign each of these points to a separate cluster:
Now, based on the similarity of these clusters, we can combine the most similar clusters and repeat this process until only a single cluster is left:
We are essentially building a hierarchy of clusters. That’s why this algorithm is called hierarchical clustering. I will discuss how to decide the number of clusters later. For now, let’s look at the
different types of hierarchical clustering.
Types of Hierarchical Clustering
There are mainly two types of hierarchical clustering:
• Agglomerative hierarchical clustering
• Divisive Hierarchical clustering
Let’s understand each type in detail.
Agglomerative Hierarchical Clustering
In this technique, we assign each point to an individual cluster. Suppose there are 4 data points. We will assign each of these points to a cluster and hence will have 4 clusters in the beginning:
Then, at each iteration, we merge the closest pair of clusters and repeat this step until only a single cluster is left:
We are merging (or adding) the clusters at each step, right? Hence, this type of clustering is also known as additive hierarchical clustering.
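In practice, you rarely write the merge loop yourself; scikit-learn's `AgglomerativeClustering` (assuming scikit-learn is installed) performs this bottom-up merging for you:

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

# Two well-separated blobs of points
X, _ = make_blobs(n_samples=20, centers=2, cluster_std=0.5, random_state=0)

# Merge clusters bottom-up until only n_clusters remain
model = AgglomerativeClustering(n_clusters=2, linkage='ward')
labels = model.fit_predict(X)
```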
Divisive Hierarchical Clustering
Divisive hierarchical clustering works in the opposite way. Instead of starting with n clusters (in case of n observations), we start with a single cluster and assign all the points to that cluster.
So, it doesn’t matter if we have 10 or 1000 data points. All these points will belong to the same cluster at the beginning:
Now, at each iteration, we split the farthest point in the cluster and repeat this process until each cluster only contains a single point:
We are splitting (or dividing) the clusters at each step, hence the name divisive hierarchical clustering.
Agglomerative Clustering is widely used in the industry and will be the article’s focus. Divisive hierarchical clustering will be a piece of cake once we have a handle on the agglomerative type.
Applications of Hierarchical Clustering
Here are some common applications of hierarchical clustering:
1. Biological Taxonomy: Hierarchical clustering is extensively used in biology to classify organisms into hierarchical taxonomies based on similarities in genetic or phenotypic characteristics. It
helps understand evolutionary relationships and biodiversity.
2. Document Clustering: In natural language processing, hierarchical clustering groups similar documents or texts. It aids in topic modelling, document organization, and information retrieval.
3. Image Segmentation: Hierarchical clustering segments images by grouping similar pixels or regions based on colour, texture, or other visual features. It finds applications in medical imaging,
remote sensing, and computer vision.
4. Customer Segmentation: Businesses use hierarchical clustering to group customers into segments based on their purchasing behaviours, demographics, or preferences. This helps with targeted
marketing, personalized recommendations, and customer relationship management.
5. Anomaly Detection: Hierarchical clustering can identify outliers or anomalies in datasets by isolating data points that do not fit well into any cluster. It is useful in fraud detection, network
security, and quality control.
6. Social Network Analysis: Hierarchical clustering helps uncover community structures or hierarchical relationships in social networks by clustering users based on their interactions, interests, or
affiliations. It aids in understanding network dynamics and identifying influential users.
7. Market Basket Analysis: Retailers use hierarchical clustering to analyze transaction data and identify associations between products frequently purchased together. It enables them to optimize
product placements, promotions, and cross-selling strategies.
Advantages and Disadvantages of Hierarchical Clustering
Here are some advantages and disadvantages of hierarchical clustering:
Advantages of hierarchical clustering:
1. Easy to interpret: Hierarchical clustering produces a dendrogram, a tree-like structure that shows the order in which clusters are merged. This dendrogram provides a clear visualization of the
relationships between clusters, making the results easy to interpret.
2. No need to specify the number of clusters: Unlike other clustering algorithms, such as k-means, hierarchical clustering does not require you to specify the number of clusters beforehand. The
algorithm determines the number of clusters based on the data and the chosen linkage method.
3. Captures nested clusters: Hierarchical clustering captures the hierarchical structure in the data, meaning it can identify clusters within clusters (nested clusters). This can be useful when the
data naturally forms a hierarchy.
4. Robust to noise: Hierarchical clustering is robust to noise and outliers because it considers the entire dataset when forming clusters. Outliers may not significantly affect the clustering
process, especially if a suitable distance metric and linkage method are chosen.
Disadvantages of hierarchical clustering:
1. Computational complexity: Hierarchical clustering can be computationally expensive, especially for large datasets. The time complexity of hierarchical clustering algorithms is typically O(n² log n) or O(n³), where n is the number of data points.
2. Memory usage: Besides computational complexity, hierarchical clustering algorithms can consume a lot of memory, particularly when dealing with large datasets. Storing the entire distance matrix
between data points can require substantial memory.
3. Difficulty with large datasets: Due to its computational complexity and memory requirements, hierarchical clustering may not be suitable for large datasets. In such cases, alternative clustering
methods, such as k-means or DBSCAN, may be more appropriate.
4. Sensitive to noise and outliers: While hierarchical clustering is generally robust to noise and outliers, extreme outliers or noise points can still affect the clustering results, especially if
they are not handled properly beforehand.
5. Difficulty in merging clusters: Once clusters are formed in hierarchical clustering, merging or splitting them can be difficult, especially if the clustering uses a divisive method. This lack of
flexibility can be a limitation in certain scenarios where cluster adjustments are needed.
Application of Hierarchical Clustering with Python
In Python, the scipy and scikit-learn libraries are often used to perform hierarchical clustering. Here’s how you can apply hierarchical clustering using Python:
1. Import Necessary Libraries: First, you’ll need to import the necessary libraries: numpy for numerical operations, matplotlib for plotting, and scipy.cluster.hierarchy for hierarchical clustering.
2. Generate or Load Data: You can generate a synthetic dataset or load your dataset.
3. Compute the Distance Matrix: Compute the distance matrix, which will be used to form clusters.
4. Perform Hierarchical Clustering: Use the linkage method to perform hierarchical clustering.
5. Plot the Dendrogram: Visualize the clusters using a dendrogram.
Here’s an example of hierarchical clustering using Python:
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.cluster.hierarchy import fcluster
from sklearn.datasets import make_blobs
# Generate sample data
X, y = make_blobs(n_samples=100, centers=3, cluster_std=0.60, random_state=0)
# Compute the linkage matrix
Z = linkage(X, 'ward')
# Plot the dendrogram
plt.figure(figsize=(10, 7))
plt.title("Hierarchical Clustering Dendrogram")
dendrogram(Z)
plt.xlabel("Sample index")
plt.ylabel("Distance")
plt.show()

# Determine the clusters
max_d = 7.0  # this can be adjusted based on the dendrogram
clusters = fcluster(Z, max_d, criterion='distance')

# Plot the clusters
plt.figure(figsize=(10, 7))
plt.scatter(X[:, 0], X[:, 1], c=clusters, cmap='prism')
plt.title("Hierarchical Clustering")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()
Supervised vs Unsupervised Learning
Understanding the difference between supervised and unsupervised learning is important before we dive into the Clustering hierarchy. Let me explain this difference using a simple example.
Suppose we want to estimate the count of bikes that will be rented in a city every day:
Or, let’s say we want to predict whether a person on board the Titanic survived or not:
• In the first example, we have to predict the number of bikes based on features like the season, holiday, working day, weather, and temperature.
• In the second example, we are predicting whether a passenger survived. In the ‘Survived’ variable, 0 represents that the person did not survive, and 1 means the person did make it out alive.
The independent variables here include Pclass, Sex, Age, Fare, etc.
Let’s look at the figure below to understand this visually:
Here, y is our dependent or target variable, and X represents the independent variables. The target variable is dependent on X, also called a dependent variable. We train our model using the
independent variables to supervise the target variable. Hence, the name supervised learning.
When training the model, we aim to generate a function that maps the independent variables to the desired target. Once the model is trained, we can pass new sets of observations, and the model will
predict their target. This, in a nutshell, is supervised learning.
Unsupervised learning, on the other hand, works without a target variable. In these cases, we try to divide the entire data into a set of groups. These groups are known as clusters, and the process of making them is known as clustering.
This technique is generally used to cluster a population into different groups. Some common examples include segmenting customers, clustering similar documents, and recommending similar songs.
There are many more applications of unsupervised learning. If you come across any interesting ones, feel free to share them in the comments section below!
Various algorithms help us make these clusters. The most commonly used clustering algorithms are K-means and Hierarchical clustering.
Why Hierarchical Clustering?
We should first understand how K-means works before we explore hierarchical clustering. Trust me, it will make the concept much easier.
Here’s a brief overview of how K-means works:
• Decide the number of clusters (k)
• Select k random points from the data as centroids
• Assign all the points to the nearest cluster centroid
• Calculate the centroid of newly formed clusters
• Repeat steps 3 and 4
It is an iterative process. It will continue until the centroids of newly formed clusters do not change or the maximum number of iterations is reached.
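The steps above can be sketched in NumPy (a minimal illustration, not a production implementation):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 2: select k random points from the data as centroids
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 3: assign all the points to the nearest cluster centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: calculate the centroid of newly formed clusters
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Step 5: repeat until the centroids stop changing
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated blobs of points
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels, centroids = kmeans(X, 2)
```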
However, there are certain challenges with K-means. It always tries to make clusters of the same size. Also, we have to decide the number of clusters at the beginning of the algorithm. Ideally, we
would not know how many clusters we should have at the beginning of the algorithm, and hence, it is a challenge with K-means.
This is a gap hierarchical clustering bridges with aplomb. It takes away the problem of having to pre-define the number of clusters. Sounds like a dream! So, let’s see what hierarchical clustering is
and how it improves on K-means.
How Does Hierarchical Clustering Improve on K-means?
Hierarchical clustering and K-means are popular clustering algorithms with different strengths and weaknesses. Here are some ways in which hierarchical clustering can improve on K-means:
1. No Need to Pre-specify Number of Clusters
Hierarchical Clustering:
• Does not require the number of clusters (k) to be specified in advance.
• The dendrogram provides a visual representation of the hierarchy of clusters, and the number of clusters can be determined by cutting the dendrogram at a desired level.
K-means:
• It requires specifying the number of clusters (k) beforehand, which can be difficult if the optimal number of clusters is unknown.
2. Captures Nested Clusters
Hierarchical Clustering:
• It can identify nested clusters, meaning it can find clusters within other clusters.
• This is useful for datasets with a natural hierarchical structure (e.g., taxonomy of biological species).
K-means:
• Assumes clusters are flat and does not capture hierarchical relationships.
3. Flexibility with Cluster Shapes
Hierarchical Clustering:
• Can find clusters of arbitrary shapes.
• The algorithm is not restricted to spherical clusters and can capture more complex cluster structures.
K-means:
• Assumes clusters are spherical and of similar size, which may not be suitable for datasets with irregularly shaped clusters.
4. Distance Metrics and Linkage Criteria
Hierarchical Clustering:
• Offers flexibility in distance metrics (e.g., Euclidean, Manhattan) and linkage criteria (e.g., single, complete, average).
• This flexibility can improve clustering performance on different types of data.
K-means:
• Typically, it uses the Euclidean distance, which may not be suitable for all data types.
5. Handling Outliers
Hierarchical Clustering:
• Outliers can be identified as singleton clusters at the bottom of the dendrogram.
• This makes it easier to detect and potentially remove outliers.
K-means:
• Sensitive to outliers, as they can significantly affect the position of cluster centroids.
6. Robustness to Initialization
Hierarchical Clustering:
• Does not require random initialization of cluster centroids.
• The clustering result is deterministic and does not depend on initial conditions.
K-means:
• Requires random initialization of centroids, leading to different clustering results in different runs.
• The algorithm may converge to local minima, depending on the initial placement of centroids.
7. Visual Interpretation
Hierarchical Clustering:
• The dendrogram provides a visual and interpretable representation of the clustering process.
• It helps in understanding the relationships between clusters and the data structure.
K-means:
• Provides cluster labels and centroids but does not visually represent the clustering process.
Practical Example
Let’s consider a practical example using hierarchical clustering and K-means on a simple dataset:
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
# Generate sample data
X, y = make_blobs(n_samples=100, centers=3, cluster_std=0.60, random_state=0)
# Hierarchical Clustering
Z = linkage(X, 'ward')
plt.figure(figsize=(10, 7))
plt.title("Hierarchical Clustering Dendrogram")
dendrogram(Z)
plt.show()

# K-means Clustering
kmeans = KMeans(n_clusters=3, random_state=0).fit(X)
labels = kmeans.labels_

plt.figure(figsize=(10, 7))
plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='prism')
plt.title("K-means Clustering")
plt.show()
Steps to Perform Hierarchical Clustering
We know that in hierarchical clustering, we merge the most similar points or clusters. Now the question is: How do we decide which points are similar and which are not? It’s one of the most important
questions in clustering!
Here’s one way to calculate similarity – Take the distance between the centroids of these clusters. The points with the least distance are referred to as similar points, and we can merge them. We can
also refer to this as a distance-based algorithm (since we are calculating the distances between the clusters).
In hierarchical clustering, we have a concept called a proximity matrix. This stores the distances between each point. Let’s take an example to understand this matrix and the steps to perform
hierarchical clustering.
Setting up the Example
Suppose a teacher wants to divide her students into different groups. She has the marks scored by each student in an assignment, and based on these marks, she wants to segment them into groups.
There’s no fixed target here as to how many groups to have. Since the teacher does not know what type of students should be assigned to which group, it cannot be solved as a supervised learning
problem. So, we will try to apply hierarchical clustering here and segment the students into different groups.
Let’s take a sample of 5 students:
Creating a Proximity Matrix
First, we will create a proximity matrix that tells us the distance between each of these points. Since we are calculating the distance of each point from each of the other points, we will get a
square matrix of shape n X n (where n is the number of observations).
Let’s make the 5 x 5 proximity matrix for our example:
The diagonal elements of this matrix will always be 0 as the distance of a point with itself is always 0. We will use the Euclidean distance formula to calculate the rest of the distances. So, let’s
say we want to calculate the distance between points 1 and 2:
√((10 − 7)²) = √9 = 3
Similarly, we can calculate all the distances and fill the proximity matrix.
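SciPy can compute such a proximity matrix directly. Only the first two marks (10 and 7) come from the worked example; the remaining three values are hypothetical placeholders:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Marks of 5 students (one feature each); the last three values are made up
marks = np.array([[10.0], [7.0], [28.0], [20.0], [35.0]])

# n x n matrix of pairwise Euclidean distances, with zeros on the diagonal
prox = squareform(pdist(marks, metric='euclidean'))
```

Here `prox[0, 1]` is 3.0, matching the hand calculation above, and the diagonal is all zeros.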
Steps to Perform Hierarchical Clustering
Step 1: First, we assign all the points to an individual cluster:
Different colours here represent different clusters. You can see that we have 5 different clusters for the 5 points in our data.
Step 2: Next, we will look at the smallest distance in the proximity matrix and merge the points with the smallest distance. We then update the proximity matrix:
Here, the smallest distance is 3 and hence we will merge point 1 and 2:
Let’s look at the updated clusters and accordingly update the proximity matrix:
Here, we have taken the maximum of the two marks (7, 10) to replace the marks for this cluster. Instead of the maximum, we can also take the minimum value or the average values as well. Now, we will
again calculate the proximity matrix for these clusters:
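The choice between the maximum, minimum, or average distance in this step corresponds to SciPy's 'complete', 'single', and 'average' linkage methods (the sample values below are hypothetical):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

pts = np.array([[10.0], [7.0], [28.0], [20.0], [35.0]])  # hypothetical marks

# Same data, three different rules for the inter-cluster distance
Z_complete = linkage(pts, method='complete')  # maximum, as in the example
Z_single = linkage(pts, method='single')      # minimum
Z_average = linkage(pts, method='average')    # mean
```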
Step 3: We will repeat step 2 until only a single cluster is left.
So, we will first look at the minimum distance in the proximity matrix and then merge the closest pair of clusters. We will get the merged clusters as shown below after repeating these steps:
We started with 5 clusters and finally had a single cluster. This is how agglomerative hierarchical clustering works. But the burning question remains—how do we decide the number of clusters? Let’s
understand that in the next section.
How to Choose the Number of Clusters in Hierarchical Clustering?
Are you ready to finally answer this question that has been hanging around since we started learning? To get the number of clusters for hierarchical clustering, we use an awesome concept called a dendrogram.
A dendrogram is a tree-like diagram that records the sequences of merges or splits.
Let’s get back to the teacher-student example. Whenever we merge two clusters, a dendrogram will record the distance between them and represent it in graph form. Let’s see how a dendrogram looks:
We have the samples of the dataset on the x-axis and the distance on the y-axis. Whenever two clusters are merged, we will join them in this dendrogram, and the height of the join will be the
distance between these points. Let’s build the dendrogram for our example:
Take a moment to process the above image. We started by merging samples 1 and 2, and the distance between these two samples was 3 (refer to the first proximity matrix in the previous section). Let’s
plot this in the dendrogram:
Here, we can see that we have merged samples 1 and 2. The vertical line represents the distance between these samples. Similarly, we plot all the steps where we merged the clusters, and finally, we
get a dendrogram like this:
We can visualize the steps of hierarchical clustering. The more the distance of the vertical lines in the dendrogram, the more the distance between those clusters.
Now, we can set a threshold distance and draw a horizontal line (Generally, we try to set the threshold so that it cuts the tallest vertical line). Let’s set this threshold as 12 and draw a
horizontal line:
The number of clusters will be the number of vertical lines intersected by the threshold line. In the above example, since the red line intersects two vertical lines, we will have 2 clusters: one containing samples (1, 2, 4) and the other containing samples (3, 5).
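This merge-then-cut procedure can be sketched with SciPy. The five 1-D values below are hypothetical stand-ins for the samples above; `fcluster` cuts the merge tree at a chosen distance threshold:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Five samples (hypothetical values standing in for the example above)
X = np.array([[3.0], [6.0], [20.0], [25.0], [31.0]])

# Agglomerative merge history (single linkage: nearest-point distance)
Z = linkage(X, method='single')

# Cut the tree at distance threshold 12: merges above 12 are undone
labels = fcluster(Z, t=12, criterion='distance')
print(labels)  # two clusters: {samples 1, 2} and {samples 3, 4, 5}
```

The same `Z` can be passed to `scipy.cluster.hierarchy.dendrogram` to draw the diagram itself.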
Solving the Wholesale Customer Segmentation Problem
Time to get our hands dirty in Python!
We will be working on a wholesale customer segmentation problem. You can download the dataset using this link. The data is hosted on the UCI Machine Learning repository. The aim is to segment a wholesale distributor's clients based on their annual spending on diverse product categories, such as milk, grocery, etc.
Let’s explore the data first and then apply Hierarchical Clustering to segment the clients.
Required Libraries
Load the data and look at the first few rows:
Python Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#%matplotlib inline
data = pd.read_csv('Wholesale customers data.csv')
data.head()
There are multiple product categories: Fresh, Milk, Grocery, and so on. The values represent the number of units each client purchases for each product. Our aim is to make clusters from this data to segment similar clients together. We will, of course, use hierarchical clustering for this problem.
But before applying, we have to normalize the data so that the scale of each variable is the same. Why is this important? If the scale of the variables is not the same, the model might become biased
towards the variables with a higher magnitude, such as fresh or milk (refer to the above table).
So, let’s first normalize the data and bring all the variables to the same scale:
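A minimal sketch of the scaling step, using NumPy to mirror what `sklearn.preprocessing.normalize` does by default (the two rows are hypothetical stand-ins for the real data):

```python
import numpy as np

# Hypothetical stand-ins for three spending columns (Fresh, Milk, Grocery)
X = np.array([[12669.0, 9656.0, 7561.0],
              [ 7057.0, 9810.0, 9568.0]])

# Scale each row to unit Euclidean norm so no variable dominates by magnitude
X_scaled = X / np.linalg.norm(X, axis=1, keepdims=True)
print(X_scaled)
```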
Here, we can see that the scale of all the variables is almost similar. Now, we are good to go. Let’s first draw the dendrogram to help us decide the number of clusters for this particular problem:
The x-axis contains the samples and the y-axis represents the distance between them. The vertical line with the maximum distance is the blue one, so we can set a threshold of 6 and cut the dendrogram there:
We have two clusters as this line cuts the dendrogram at two points. Let’s now apply hierarchical clustering for 2 clusters:
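That step might look like the following scikit-learn sketch (here `data_scaled` is a random placeholder for the normalized wholesale data):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(42)
data_scaled = rng.random((10, 3))  # placeholder for the normalized data

# Two clusters, Ward linkage (scikit-learn's default)
model = AgglomerativeClustering(n_clusters=2)
labels = model.fit_predict(data_scaled)
print(labels)  # one 0/1 label per client
```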
We can see the values of 0s and 1s in the output since we defined 2 clusters. 0 represents the points that belong to the first cluster and 1 represents points in the second cluster. Let’s now
visualize the two clusters:
Awesome! We can visualize the two clusters here. This is how we can implement hierarchical clustering in Python.
In our journey, we’ve uncovered a powerful tool for unraveling the complexities of data relationships. From the conceptual elegance of dendrograms to their practical applications in diverse fields
like biology, document analysis, cluster analysis, and customer segmentation, hierarchical cluster analysis emerges as a guiding light in the labyrinth of data exploration.
As we conclude this expedition, we stand at the threshold of possibility, where every cluster tells a story, and every dendrogram holds the key to unlocking the secrets of data science. In the
ever-expanding landscape of Python and machine learning, hierarchical clustering stands as a stalwart companion, guiding us toward new horizons of discovery and understanding.
If you are still relatively new to data science, I highly recommend taking the Applied Machine Learning course. It is one of the most comprehensive end-to-end machine learning courses you will find
anywhere. Hierarchical clustering is just one of the diverse topics we cover in the course.
What are your thoughts on hierarchical clustering? Do you feel there's a better way to create clusters using fewer computational resources? Connect with me in the comments section below, and let's discuss.
Frequently Asked Questions
Q1. What is hierarchical K clustering?
A. Hierarchical K clustering is a method of partitioning data into K clusters where each cluster contains similar data points organized in a hierarchical structure.
Q2. What is an example of a hierarchical cluster?
A. An example of a hierarchical cluster could be grouping customers based on their purchasing behavior, where clusters are formed based on similarities in purchasing patterns, leading to a
hierarchical tree-like structure.
Q3. What are the two methods of hierarchical clustering?
A. The two methods of hierarchical clustering are:
1. Agglomerative hierarchical clustering: It starts with each data point as a separate cluster and merges the closest clusters together until only one cluster remains.
2. Divisive hierarchical clustering: It begins with all data points in one cluster and recursively splits the clusters into smaller ones until each data point is in its cluster.
Q4. What is hierarchical clustering of features?
A. Hierarchical clustering of features involves clustering features or variables instead of data points. It identifies groups of similar features based on their characteristics, enabling
dimensionality reduction or revealing underlying patterns in the data.
Responses From Readers
Dear Pulakit Sharma, Thank you very much for your article. Is it possible to use R to get the same output. Your reply will be much appreciated.
Hi Thanks for sharing very informative and useful. I have one question once we cluster the data how do find the factor or variable which differentiate the cluster. For eg if age is a factor then all
the cluster should have the different age bins this way we can say that age is a factor which differentiate the clusters. My question how do we find when we have n variables ? Thanks
Really well explained for beginners. As already mentioned in article clustering is very challenging for data analyst and always requires plenty of expertise
Hi Pulkit, I really applaud your efforts to effectively communicate the concepts of Machine Learning with visualisations. I am not a ML practitioner but I am a student and I have recently studied
these subjects recently. I would like to add few points to the above article. Firstly, I think the scaling operation that you have performed has been done on the categorical variables. I dont think
categorical variables need to undergo the scaling and transformation(normalisation). The above normalised data for region and channel do not make any sense. Secondly, the drawbacks of hierarchical
clustering have not been posted here. The drawbacks of Hierarchical clustering is that they do not perform well with large datasets. I hope my inputs are helpful to you. Regards, MD
Amazing post, thanks for sharing.
can you please share the python code to do this clustering . what u have said is theoretical . How to do it in python notebook ???
Please explain how to perform clustering if the number of variables are more than 20.
Awesome! We can clearly visualize the two clusters here, in reality, we have to get the nest cluster out so that we can target this group. None of the articles or training shows that. All end up with
Hi, thanks for this article, I still can't find the code. Can you please share that link for the code here.
How to do this for categorical variables?
Hi! I find it really interesting, I'll try to apply it with documents to cluster them.
Hi, great job. I only have binary values for my variables, what can I do?
Dear Pulkit, Thank you very much for making this easy-to-follow demo! My question is - how do I append the calculated cluster labels to my original 'data' DataFrame? So I know which row (customer)
belongs to each cluster. Thank you
Lets say I want to cut at 2.7, so I have 7 clusters. How will the clusters be numbered? Does that mean that (2,3) will be "closer" and (4,5) also will be closer, also (6,7) and (1) will be closer to
(2,3)? How can I make sure that the numbering correctly corresponds to the hierarchy shown in the dendrogram? Thanks! Michael
Another question: Since the dendrogram linkage already shows the connections between the data - it must be calculating the Euclidean distances... So it is ACTUALLY doing the clustering already, not
just the visualization... So why should I run AgglomerativeClustering at all? Can't I simply somehow output the connections that the dendrogram found - as data, not visual? And why
AgglomerativeClustering result has to be consistent with dendrogram's? Is it the SAME algorithm? It feels to me that we are using two clusterings here... Thanks!
I am working on school project, vessel prediction from AIS data. the data corresponds to an observation of a single maritime vessel at a single point in time(more like a position report). We have to
track the movements of the different vessels given these reports over time. Do you think Hierarchical clustering is the best choice for this problem? How can we find the minimum distance between each
cluster? In your example you decided to take the maximum distance which was the blue line in the dendrogram. I followed your steps on the data I have and I feel like I need a way to figure out the
min distance between the clusters not the max; cause I will always get 2 clusters which is not ideal here!! Thanks!
hi Pulkit, how you choose the x and y variable for plt.scatter?in your example you choose 'milk' as x and 'grocery' as y. in my case i have many variable. when i choose different x and y its give me
different graph..and looks like the clustering not clear see as a group. thanks hazlan
Really good write-up!
How dendograms caluculates distance/linkage between two samples(having more than 2 features like region, fresh, milk, grocery). How does it do ?
I found the article very useful but the calculation part of the distance matrix has been done correctly. The concept was clear and I found the distance matrix has two same values[(1,2), (5,3)] at the
initial. I don't know why you avoided this basic math in the proximity matrix that too at the very beginning of the description.
I really enjoyed your write-up!. I have few questions. I'm working on a fortune500 dataset. 1) How do you know when a problem is a prediction problem or an Interpretation problem? 2) Why did we have
to cut the dendrogram at the threshold of the blue line? Why can't we just use the whole number of cluster? I'm a novice! I will really appreciate your explanations.
Dear Pulakit Sharma, Thank you very much for your article.I have a technical question, how should I do hierarchical clustering if I use my own obtained distance matrix? Also, how do I locate the time
series I use into the original data after clustering?
Very interesting. Thank you
hello sir how did you label the "milk" and "Grocery" in the last step .
When calculating distance, you are just calculating the square root of a square. Is this the correct method?
Time Series Forecasting in R: Modeling Techniques and Evaluation
Time series forecasting with R involves predicting future values based on past observations in a chronological sequence. Here’s a comprehensive guide covering the modeling and evaluation aspects:
1. Understanding Time Series Data:
• Overview: Time series data is sequential, where observations occur over time. Understand the components of time series data: trend, seasonality, and noise.
# Load time series data
my_ts <- ts(data, frequency = 12) # Adjust frequency based on data
2. Exploratory Data Analysis (EDA):
• Overview: Explore patterns, trends, and seasonality in the time series data before modeling.
# Plot time series data
plot(my_ts)
3. Time Series Modeling:
• ARIMA (AutoRegressive Integrated Moving Average): ARIMA is a popular time series model that combines autoregression, differencing, and moving averages.
# Fit ARIMA model
arima_model <- arima(my_ts, order = c(p, d, q))
• Exponential Smoothing State Space Models (ETS): ETS models capture seasonality, trend, and error components in a time series.
# Fit ETS model…
The Relationship between the Attitude toward Mathematics and the Frequency of Classroom Observations of Mathematics Lessons by Elementary School Administrators
The purpose of this study was to explore the relationship between the attitude toward mathematics, including related mathematics anxiety, and the frequency of classroom observations of mathematics
lessons by elementary school administrators. This study considered Approach-Avoidance Motivation as part of the conceptual framework guiding the research. Approach-avoidance motivation refers to a
person's approach of tasks that are pleasant or enjoyable and avoidance of tasks that are disliked or not enjoyable. This research sought to answer the questions:
1. What is the academic background in mathematics of elementary school administrators?
2. What is the attitude toward mathematics of elementary school administrators?
3. What is the frequency of classroom observations of mathematics lessons by elementary school administrators?
4. What, if any, is the relationship between the attitude toward mathematics, including related mathematics anxiety, and the frequency of classroom observations of mathematics lessons by elementary
school administrators?
The participants in this study included elementary school principals and assistant principals in one school division in Virginia. Data were collected to investigate the mathematics background,
attitude toward mathematics, and frequency of classroom observations of mathematics lessons by elementary school administrators. This study also examined the possible relationship between the
attitude toward mathematics, including related mathematics anxiety, and the frequency of classroom observations of mathematics lessons.
The attitude toward mathematics, including related mathematics anxiety, was found to have no relationship with the frequency of both formal and informal classroom observations of mathematics lessons
conducted. The sample population data indicated positive attitudes toward mathematics and low levels of mathematics anxiety, which conflicts with some previous research (Dorward and Hadley, 2011;
Hembree, 1990). The mathematics background of participants was found to be limited in the number of mathematics courses completed and teaching licensure endorsements specific to mathematics
instruction. The findings provide educational leaders with relevant research related to attitude toward mathematics and the instructional leadership practice of observing mathematics classrooms.
Central office and school leaders could benefit from explicit expectations relating to the observation of mathematics lessons in schools.
Mathematics Attitude, Mathematics Anxiety, Elementary Principal Leadership and Mathematics, Principal Observations and Feedback, elementary teachers and mathematics anxiety
Printable law of exponents
If you dont have the money to pay a home tutor then the Algebra Professor is what you need and believe me, it does all a tutor would do and probably even more.
Rebecca Cox, WY
This version of your algebra software is absolutely great! Thank you so much! It has helped me tremendously. KEEP UP THE GOOD WORK!
Warren Mills, CA
I bought Algebra Professor last year, and now its helping me with my 9th Grade Algebra class, I really like the step by step solving of equations, it's just GREAT !
Dora Greenwood, PA
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2014-05-17:
• composing square root functions with a rational function
• statistics gcse free worksheets
• Algebra 1 Questions Answers
• transforming formulas calculator
• Math trivia
• +trinomial +a.maths
• "TI-89 Quadratic Formula Program"
• Permutation problems in Aptitude test
• "financial math online"
• Quadratic Equation Calculator
• BASIC ALGEBRA QUESTIONS
• aptitude test sample paper
• "ordered triple" practice problems algebra
• ratio and proportion worksheets for KS2
• solve for perfect square sample
• what is a lesson plan that introduces multiplication
• equation factorer
• online Square root simplifier
• mental math cards grade 4-5 +pdf+free
• Pre Algebra Practice Workbook answers
• 5th grade word math problems
• combination permutation ti 89
• free printable+past simple+exam
• order of operations 7th grade worksheets
• plot gradient vector fields online
• math formula calculator for dividing polynomial
• 8th grade math ratios printable worksheets
• rule for adding subtracting multiplying and dividing integers
• "british method" in pre calculus
• Bars and graphs free high school math printouts
• ti calculator programs trigonometry
• quick answers for intermediate college algebra
• quadratic factoring online
• complex numbers solver
• elementary school mathmatics multipication
• solving equation by subtraction worksheet
• solving simultaneous nonlinear equations in excel
• third grade algebra
• conceptual physics workbook answers
• two-step equation printable worksheet
• practice erb test
• parent algebra help
• "college algebra online help"
• Free Blackline masters of laws of Exponents
• adding subtracting multiplying dividing fractions math problems
• free algerbra help
• ti 84 plus emulator
• LCM, java programming, compute
• example of quadratic word problem with solution
• kumon worksheets
• cost accounting twelfth edition download
• Quadratic Equation Word Problems
• free GRE Math quizzes with answers
• grade 11 exam papers maths
• answers for saxon algebra 1
• online subtracting fractions
• equation and inequalities year 10 undergrad
• free online+algebra 1 solver+similar figures
• problem solving decision 5th grade
• graphing calculator TI-83 online
• solving equations with exponents "worksheet"
• square root worksheet elementary
• beginning algebra worksheets
• first grade subtraction lesson plan
• algabra slover
• year 3 maths western australia printable worksheets 2007
• mixed number to decimal
• bbc simplify expressions maths exponent
• pre-algebra with pizzazz worksheet
• rationalize solver
• Least Common Multiple work sheet
• ordering fractions from least to greatest for dummies
• interval notation calculator
• help with factoring problems
• algebra online calculator Polynomial long division: Linear divisor
• pizzazz algebra worksheet
• prenticehall online tutoring
• compound interest formula + instructions to make calculator program
• simplifying ratios math worksheets
• "ti-83 plus" hex, binary, decimal
• how to multiply fractions with like denominators
• primary math work sheet
• 6th grade free multiplication practice test
• math solution finder inequality
• algebrator softmath version 4 mac
• use of QUADRATIC EQUATION in our daily life
• lesson plans for "3rd grade English" Teachers
• free printable circumference math sheets
• absolute value gr. 9
On an Embedding Property of Generalized Carter Subgroups
If E and F are saturated formations, we say that E is strongly contained in F if for any solvable group G with E-subgroup, E, and F-subgroup, F, some conjugate of E is contained in F. In this paper,
we investigate the problem of finding the formations which strongly contain a fixed saturated formation E.
Our main results are restricted to formations, E, such that E = {G|G/F(G) ϵT}, where T is a non-empty formation of solvable groups, and F(G) is the Fitting subgroup of G. If T consists only of the
identity, then E=N, the class of nilpotent groups, and for any solvable group, G, the N-subgroups of G are the Carter subgroups of G.
We give a characterization of strong containment which depends only on the formations E, and F. From this characterization, we prove:
If T is a non-empty formation of solvable groups, E = {G|G/F(G) ϵT}, and E is strongly contained in F, then
(1) there is a formation V such that F = {G|G/F(G) ϵV}.
(2) If for each prime p, we assume that T does not contain the class, S[p’], of all solvable p’-groups, then either E = F, or F contains all solvable groups.
This solves the problem for the Carter subgroups.
We prove the following result to show that the hypothesis of (2) is not redundant:
If R = {G|G/F(G) ϵS[r’]}, then there are infinitely many formations which strongly contain R.
Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: (Mathematics)
Degree Grantor: California Institute of Technology
Division: Physics, Mathematics and Astronomy
Major Option: Mathematics
Thesis Availability: Public (worldwide access)
Research Advisor(s): • Hall, Marshall
Thesis Committee: • Unknown, Unknown
Defense Date: 1 January 1966
Funders: ┌──────────────────┬───────────────┐
│ Funding Agency   │ Grant Number  │
├──────────────────┼───────────────┤
│ Caltech          │ UNSPECIFIED   │
│ NSF              │ UNSPECIFIED   │
│ ARCS Foundation  │ UNSPECIFIED   │
└──────────────────┴───────────────┘
Record Number: CaltechTHESIS:09222015-083434916
Persistent URL: https://resolver.caltech.edu/CaltechTHESIS:09222015-083434916
DOI: 10.7907/S8GH-XA56
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 9163
Collection: CaltechTHESIS
Deposited By: INVALID USER
Deposited On: 22 Sep 2015 19:48
Last Modified: 27 Feb 2024 22:05
Thesis Files
PDF - Final Version
See Usage Policy.
Repository Staff Only: item control page
On the flexibility of block coordinate descent for large-scale optimization
We consider a large-scale minimization problem (not necessarily convex) with non-smooth separable convex penalty. Problems in this form widely arise in many modern large-scale machine learning and
signal processing applications. In this paper, we present a new perspective towards the parallel Block Coordinate Descent (BCD) methods. Specifically we explicitly give a concept of so-called
two-layered block variable updating loop for parallel BCD methods in modern computing environment comprised of multiple distributed computing nodes. The outer loop refers to the block variable
updating assigned to distributed nodes, and the inner loop involves the updating step inside each node. Each loop allows to adopt either Jacobi or Gauss–Seidel update rule. In particular, we give
detailed theoretical convergence analysis to two practical schemes: Jacobi/Gauss–Seidel and Gauss–Seidel/Jacobi that embodies two algorithms respectively. Our new perspective and behind theoretical
results help devise parallel BCD algorithms in a principled fashion, which in turn lend them a flexible implementation for BCD methods suited to the parallel computing environment. The effectiveness
of the algorithm framework is verified on the benchmark tasks of large-scale ℓ[1] regularized sparse logistic regression and non-negative matrix factorization.
Scopus Subject Areas
• Computer Science Applications
• Cognitive Neuroscience
• Artificial Intelligence
User-Defined Keywords
• Block coordinate descent
• Gauss–Seidel
• Jacobi
• Large-scale optimization
Dive into the research topics of 'On the flexibility of block coordinate descent for large-scale optimization'. Together they form a unique fingerprint.
American Mathematical Society
A truncated Gauss-Kuz′min law
HTML articles powered by AMS MathViewer
by Doug Hensley
Trans. Amer. Math. Soc. 306 (1988), 307-327
DOI: https://doi.org/10.1090/S0002-9947-1988-0927693-3
PDF | Request permission
The transformations ${T_n}$ which map $x \in [0, 1)$ onto $0$ (if $x \leqslant 1/(n + 1)$), and to $\{ 1/x\}$ otherwise, are truncated versions of the continued fraction transformation $T:x \to \{ 1/
x\}$ (but $0 \to 0$). An analog to the Gauss-Kuzmin result is obtained for these ${T_n}$, and is used to show that the Lebesgue measure of $T_n^{ - k}\{ 0\}$ approaches $1$ exponentially. From this
fact is obtained a new proof that the ratios $\nu /k$, where $\nu$ denotes any solution of ${\nu ^2} \equiv - 1\bmod k$, are uniformly distributed $\bmod 1$ in the sense of Weyl. References
E. Landau, Vorlesungen über Zahlentheorie, Chelsea, 1969.
Similar Articles
• Retrieve articles in Transactions of the American Mathematical Society with MSC: 11K36, 11A55, 11H41
• Retrieve articles in all journals with MSC: 11K36, 11A55, 11H41
Bibliographic Information
• © Copyright 1988 American Mathematical Society
• Journal: Trans. Amer. Math. Soc. 306 (1988), 307-327
• MSC: Primary 11K36; Secondary 11A55, 11H41
• DOI: https://doi.org/10.1090/S0002-9947-1988-0927693-3
• MathSciNet review: 927693
Photo by Štefan Štefančík on Unsplash
In computer science, a search algorithm is any algorithm which solves the search problem, namely, to retrieve information stored within some data structure, or calculated in the search space of a
problem domain, either with discrete or continuous values.
From the apps on your phone to the sensors in your wearables to how posts appear in your Facebook News Feed, every site runs on its own algorithms. You would be hard-pressed to find a service that isn't powered by some form of algorithm.
Search algorithms form an important part of many programs. Searching is a basic, fundamental operation in computing, carried out step by step to locate a specific item among a collection of data. We can choose from numerous search types and select the algorithm that best matches the size and structure of the database to provide a user-friendly experience.
Let us now dive into different types of searching algorithms:
➢ Searching is useful to find any record stored in a file.
➢ Searching for books in a library
For that we have two searching algorithms:
➢ Linear Search / Sequential Search
➢ Binary Search
How does it work?
Linear search is the basic search algorithm used in data structures; it is also called sequential search. Linear search is used to find a particular element in an array. Unlike binary search, it does not require the array to be arranged in any order (ascending or descending).
Input: Array A of n elements
Output: Return index/position of the element X in the array A
Two cases:
➢ Element is present: linear search returns its index.
➢ Element is not present: report that it was not found.
#include <iostream>
using namespace std;

int Linear_search(int A[], int n, int x)
{
    for (int i = 0; i < n; i++)
        if (A[i] == x)
            return i;
    return -1;  // not found
}

int main(void)
{
    int A[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int n = sizeof(A) / sizeof(A[0]);
    int result = Linear_search(A, n, x);
    if (result == -1)
        cout << "Element is not present in array";
    else
        cout << "Element is present at index " << result;
    return 0;
}
Time and Space Complexity of Linear Search
• Best case: 1 comparison. Occurs when x is the first element examined.
• Worst case: n comparisons. Either x is the last element examined, or x is not in the list.
• Average case: harder to pin down, because it depends on the order of the elements in the list and on the probability that x is in the list at all.
This type of searching algorithm is used to find the position of a specific value in a sorted array. Binary search works on the principle of divide and conquer, and it is considered one of the fastest searching algorithms, provided the data is sorted.
How does it work?
In its simplest form, Binary Search operates on a contiguous sequence with a specified left and right index. This is called the Search Space. Binary Search maintains the left, right, and middle
indices of the search space and compares the search target or applies the search condition to the middle value of the collection; if the condition is unsatisfied or values unequal, the half in which
the target cannot lie is eliminated and the search continues on the remaining half until it is successful. If the search ends with an empty half, the condition cannot be fulfilled and target is not
#include <iostream>
using namespace std;

int binary_search(int A[], int l, int r, int x)
{
    while (l <= r) {
        int mid = l + (r - l) / 2;  // avoids overflow of (l + r)
        if (A[mid] == x)
            return mid;
        if (A[mid] < x)  // if x is greater, ignore the left half
            l = mid + 1;
        else             // if x is smaller, ignore the right half
            r = mid - 1;
    }
    return -1;  // if we reach here, the element was not present
}

int main(void)
{
    int A[] = { 5, 12, 23, 42, 64, 78 };
    int x = 35;
    int n = sizeof(A) / sizeof(A[0]);
    int result = binary_search(A, 0, n - 1, x);
    if (result == -1)
        cout << "Element not found in the array";
    else
        cout << "Element found at index " << result;
    return 0;
}
The time complexity of the binary search algorithm is O(log n). The best-case time complexity would be O(1) when the central index would directly match the desired value. The worst-case scenario
could be the values at either extremity of the list or values not in the list.
The median-of-medians algorithm is a deterministic linear-time selection algorithm.
HOW DOES IT WORK?
The algorithm works by dividing a list into sublists and then determines the approximate median in each of the sublists. Then, it takes those medians and puts them into a list and finds the median of
that list. It uses that median value as a pivot and compares other elements of the list against the pivot. If an element is less than the pivot value, the element is placed to the left of the pivot,
and if the element has a value greater than the pivot, it is placed to the right. The algorithm recurses on the relevant sublist, homing in on the value it is looking for.
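The deterministic version can be sketched in a few lines of Python (names and the three-way partition are illustrative; groups of five give the classical linear-time bound):

```python
def select(a, k):
    """Return the k-th smallest element (0-indexed) of a, in O(n) worst case."""
    a = list(a)
    if len(a) <= 5:
        return sorted(a)[k]
    # Median of each group of five, then the median of those medians as pivot.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2] for i in range(0, len(a), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    hi = [x for x in a if x > pivot]
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return select(hi, k - len(lo) - len(eq))

print(select([7, 10, 4, 3, 20, 15, 8, 12, 6], 3))  # 4th smallest element: 7
```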
Example: find the k = 4th smallest element of {7, 10, 4, 3, 20, 15, 8, 12, 6} (sorted, this is 7).
Choose an element a = 10 as the pivot and divide the array into three parts S1 = {x < a}, S2 = {x = a}, S3 = {x > a}:
S1 = {7, 4, 3, 8, 6}, S2 = {10}, S3 = {20, 15, 12}
Since length(S1) = 5 >= k = 4, the answer lies in the left part: select(4, S1).
In S1 = {7, 4, 3, 8, 6}, choose the pivot a = 3 and divide S1 into three parts:
S11 = {}, S12 = {3}, S13 = {7, 4, 8, 6}
Here length(S11) < k and length(S11) + length(S12) = 1 < k, so recurse on the right part with k = 4 - 1 = 3: select(3, S13).
In S13 = {7, 4, 8, 6}, choose the pivot a = 7 and divide S13 into three parts:
S131 = {4, 6}, S132 = {7}, S133 = {8}
The first condition fails (length(S131) = 2 < k), but length(S131) + length(S132) = 3 >= k = 3, so the pivot is the answer:
Return a = 7
The median-of-medians algorithm runs in O(n) time.
Newtonian and Relativistic Cosmologies
Green, S. (2012). Newtonian and Relativistic Cosmologies. Perimeter Institute. https://pirsa.org/12020128
Green, Stephen. Newtonian and Relativistic Cosmologies. Perimeter Institute, Feb. 09, 2012, https://pirsa.org/12020128
@misc{ pirsa_PIRSA:12020128,
doi = {10.48660/12020128},
url = {https://pirsa.org/12020128},
author = {Green, Stephen},
keywords = {Strong Gravity},
language = {en},
title = {Newtonian and Relativistic Cosmologies},
publisher = {Perimeter Institute},
year = {2012},
month = {feb},
note = {PIRSA:12020128 see, \url{https://pirsa.org}}
Talk number: PIRSA:12020128
Cosmological N-body simulations are now being performed using Newtonian gravity on scales larger than the Hubble radius. It is well known that a uniformly expanding, homogeneous ball of dust in
Newtonian gravity satisfies the same equations as arise in relativistic FLRW cosmology, and it also is known that a correspondence between Newtonian and relativistic dust cosmologies continues to
hold in linearized perturbation theory in the marginally bound/spatially flat case. Nevertheless, it is far from obvious that Newtonian gravity can provide a good global description of an
inhomogeneous cosmology when there is significant nonlinear dynamical behavior at small scales. We investigate this issue in the light of a perturbative framework that we have recently developed,
which allows for such nonlinearity at small scales. We propose a relatively straightforward "dictionary"---which is exact at the linearized level---that maps Newtonian dust cosmologies into general
relativistic dust cosmologies, and we use our "ordering scheme" to determine the degree to which the resulting metric and matter distribution solve Einstein's equation. We find that Einstein's
equation fails to hold at "order 1" at small scales and at "order $\epsilon$" at large scales. We then find the additional corrections to the metric and matter distribution needed to satisfy
Einstein's equation to these orders. While these corrections are of some interest in their own right, our main purpose in calculating them is that their smallness should provide a criterion for the
validity of the original dictionary (as well as simplified versions of this dictionary). We expect that, in realistic Newtonian cosmologies, these additional corrections will be very small; if so,
this should provide strong justification for the use of Newtonian simulations to describe relativistic cosmologies, even on scales larger than the Hubble radius.
Introduction: Distribution of Sample Means
What you’ll learn to do: Describe the sampling distribution of sample means.
• Recognize when to use a hypothesis test or a confidence interval to draw a conclusion about a population mean.
• Describe the sampling distribution of sample means.
• Draw conclusions about a population mean from a simulation.
NAG Library
NAG Library Routine Document
1 Purpose
F11DCF solves a real sparse nonsymmetric system of linear equations, represented in coordinate storage format, using a restarted generalized minimal residual (RGMRES), conjugate gradient squared
(CGS), stabilized bi-conjugate gradient (Bi-CGSTAB), or transpose-free quasi-minimal residual (TFQMR) method, with incomplete $LU$ preconditioning.
2 Specification
SUBROUTINE F11DCF ( METHOD, N, NNZ, A, LA, IROW, ICOL, IPIVP, IPIVQ, ISTR, IDIAG, B, M, TOL, MAXITN, X, RNORM, ITN, WORK, LWORK, IFAIL)
INTEGER N, NNZ, LA, IROW(LA), ICOL(LA), IPIVP(N), IPIVQ(N), ISTR(N+1), IDIAG(N), M, MAXITN, ITN, LWORK, IFAIL
REAL (KIND=nag_wp) A(LA), B(N), TOL, X(N), RNORM, WORK(LWORK)
CHARACTER(*) METHOD
3 Description
F11DCF solves a real sparse nonsymmetric linear system of equations:
using a preconditioned RGMRES (see
Saad and Schultz (1986)
), CGS (see
Sonneveld (1989)
), Bi-CGSTAB(
) (see
Van der Vorst (1989)
Sleijpen and Fokkema (1993)
), or TFQMR (see
Freund and Nachtigal (1991)
Freund (1993)
) method.
F11DCF uses the incomplete
factorization determined by
as the preconditioning matrix. A call to F11DCF must always be preceded by a call to
. Alternative preconditioners for the same storage scheme are available by calling
The matrix
, and the preconditioning matrix
, are represented in coordinate storage (CS) format (see
Section 2.1.1
in the F11 Chapter Introduction) in the arrays
, as returned from
. The array
holds the nonzero entries in these matrices, while
hold the corresponding row and column indices.
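The coordinate storage idea itself is easy to illustrate outside Fortran. The following Python sketch (not NAG's actual data structures) keeps only the nonzeros of a matrix in three parallel arrays, with 1-based indices as in the Fortran convention, and forms a matrix-vector product:

```python
def coo_matvec(n, a, irow, icol, x):
    """y = A @ x for an n-by-n sparse matrix in coordinate (CS/COO) storage:
    a[k] holds the nonzero value at row irow[k], column icol[k] (1-based)."""
    y = [0.0] * n
    for v, i, j in zip(a, irow, icol):
        y[i - 1] += v * x[j - 1]
    return y

# The 3x3 matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]] has 5 nonzeros:
a    = [2.0, 1.0, 3.0, 4.0, 5.0]
irow = [1, 1, 2, 3, 3]
icol = [1, 3, 2, 1, 3]
print(coo_matvec(3, a, irow, icol, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```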
F11DCF is a Black Box routine which calls
. If you wish to use an alternative storage scheme, preconditioner, or termination criterion, or require additional diagnostic information, you should call these underlying routines directly.
4 References
Freund R W (1993) A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems SIAM J. Sci. Comput. 14 470–482
Freund R W and Nachtigal N (1991) QMR: a Quasi-Minimal Residual Method for Non-Hermitian Linear Systems Numer. Math. 60 315–339
Saad Y and Schultz M (1986) GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 7 856–869
Salvini S A and Shaw G J (1996) An evaluation of new NAG Library solvers for large sparse unsymmetric linear systems NAG Technical Report TR2/96
Sleijpen G L G and Fokkema D R (1993) BiCGSTAB$\left(\ell \right)$ for linear equations involving matrices with complex spectrum ETNA 1 11–32
Sonneveld P (1989) CGS, a fast Lanczos-type solver for nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 10 36–52
Van der Vorst H (1989) Bi-CGSTAB, a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 13 631–644
5 Parameters
1: METHOD – CHARACTER(*)Input
On entry: specifies the iterative method to be used.
${\mathbf{METHOD}}=\text{'RGMRES'}$: restarted generalized minimum residual method.
${\mathbf{METHOD}}=\text{'CGS'}$: conjugate gradient squared method.
${\mathbf{METHOD}}=\text{'BICGSTAB'}$: bi-conjugate gradient stabilized ($\ell$) method.
${\mathbf{METHOD}}=\text{'TFQMR'}$: transpose-free quasi-minimal residual method.
Constraint: ${\mathbf{METHOD}}=\text{'RGMRES'}$, $\text{'CGS'}$, $\text{'BICGSTAB'}$ or $\text{'TFQMR'}$.
2: N – INTEGERInput
On entry: $n$, the order of the matrix $A$. This must be the same value as was supplied in the preceding call to the factorization routine.
Constraint: ${\mathbf{N}}\ge 1$.
3: NNZ – INTEGERInput
On entry: the number of nonzero elements in the matrix $A$. This must be the same value as was supplied in the preceding call to the factorization routine.
Constraint: $1\le {\mathbf{NNZ}}\le {{\mathbf{N}}}^{2}$.
4: A(LA) – REAL (KIND=nag_wp) arrayInput
On entry
: the values returned in the array
by a previous call to
5: LA – INTEGERInput
On entry: the dimension of the arrays A, IROW and ICOL as declared in the (sub)program from which F11DCF is called. This must be the same value as was supplied in the preceding call to the factorization routine.
Constraint: ${\mathbf{LA}}\ge 2\times {\mathbf{NNZ}}$.
6: IROW(LA) – INTEGER arrayInput
7: ICOL(LA) – INTEGER arrayInput
8: IPIVP(N) – INTEGER arrayInput
9: IPIVQ(N) – INTEGER arrayInput
10: ISTR(${\mathbf{N}}+1$) – INTEGER arrayInput
11: IDIAG(N) – INTEGER arrayInput
On entry
: the values returned in arrays
by a previous call to
are restored on exit.
12: B(N) – REAL (KIND=nag_wp) arrayInput
On entry: the right-hand side vector $b$.
13: M – INTEGERInput
On entry: if ${\mathbf{METHOD}}=\text{'RGMRES'}$, M is the dimension of the restart subspace; if ${\mathbf{METHOD}}=\text{'BICGSTAB'}$, M is the order $\ell$ of the polynomial Bi-CGSTAB method; otherwise, M is not referenced.
Constraints:
□ if ${\mathbf{METHOD}}=\text{'RGMRES'}$, $0<{\mathbf{M}}\le \min({\mathbf{N}},50)$;
□ if ${\mathbf{METHOD}}=\text{'BICGSTAB'}$, $0<{\mathbf{M}}\le \min({\mathbf{N}},10)$.
14: TOL – REAL (KIND=nag_wp)Input
On entry: the required tolerance. Let $x_k$ denote the approximate solution at iteration $k$, and $r_k$ the corresponding residual. The algorithm is considered to have converged at iteration $k$ when the residual $r_k$ satisfies the tolerance $\tau$, defined as follows. If ${\mathbf{TOL}}\le 0.0$, $\tau =\max(\sqrt{\epsilon},\sqrt{n}\,\epsilon)$ is used, where $\epsilon$ is the machine precision. Otherwise $\tau =\max({\mathbf{TOL}},10\epsilon,\sqrt{n}\,\epsilon)$ is used.
Constraint: ${\mathbf{TOL}}<1.0$.
15: MAXITN – INTEGERInput
On entry: the maximum number of iterations allowed.
Constraint: ${\mathbf{MAXITN}}\ge 1$.
16: X(N) – REAL (KIND=nag_wp) arrayInput/Output
On entry: an initial approximation to the solution vector $x$.
On exit: an improved approximation to the solution vector $x$.
17: RNORM – REAL (KIND=nag_wp)Output
On exit: the final value of the residual norm $\|r_k\|_{\infty}$, where $k$ is the output value of ITN.
18: ITN – INTEGEROutput
On exit: the number of iterations carried out.
19: WORK(LWORK) – REAL (KIND=nag_wp) arrayWorkspace
20: LWORK – INTEGERInput
On entry
: the dimension of the array
as declared in the (sub)program from which F11DCF is called.
□ if ${\mathbf{METHOD}}=\text{'RGMRES'}$, ${\mathbf{LWORK}}\ge 4\times{\mathbf{N}}+{\mathbf{M}}\times\left({\mathbf{M}}+{\mathbf{N}}+5\right)+101$;
□ if ${\mathbf{METHOD}}=\text{'CGS'}$, ${\mathbf{LWORK}}\ge 8\times{\mathbf{N}}+100$;
□ if ${\mathbf{METHOD}}=\text{'BICGSTAB'}$, ${\mathbf{LWORK}}\ge 2\times{\mathbf{N}}\times\left({\mathbf{M}}+3\right)+{\mathbf{M}}\times\left({\mathbf{M}}+2\right)+100$;
□ if ${\mathbf{METHOD}}=\text{'TFQMR'}$, ${\mathbf{LWORK}}\ge 11\times{\mathbf{N}}+100$.
21: IFAIL – INTEGERInput/Output
On entry: IFAIL must be set to $0$, $-1$ or $1$. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value $-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value $1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is $0$.
When the value $-\mathbf{1}\text{ or }\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit: ${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry
, explanatory error messages are output on the current error message unit (as defined by
Errors or warnings detected by the routine:
On entry, ${\mathbf{METHOD}}\ne\text{'RGMRES'}$, $\text{'CGS'}$, $\text{'BICGSTAB'}$ or $\text{'TFQMR'}$,
or ${\mathbf{N}}<1$,
or ${\mathbf{NNZ}}<1$,
or ${\mathbf{NNZ}}>{{\mathbf{N}}}^{2}$,
or ${\mathbf{LA}}<2\times{\mathbf{NNZ}}$,
or ${\mathbf{M}}<1$ and ${\mathbf{METHOD}}=\text{'RGMRES'}$ or ${\mathbf{METHOD}}=\text{'BICGSTAB'}$,
or ${\mathbf{M}}>\min({\mathbf{N}},50)$, with ${\mathbf{METHOD}}=\text{'RGMRES'}$,
or ${\mathbf{M}}>\min({\mathbf{N}},10)$, with ${\mathbf{METHOD}}=\text{'BICGSTAB'}$,
or ${\mathbf{TOL}}\ge 1.0$,
or ${\mathbf{MAXITN}}<1$,
or LWORK too small.
On entry, the CS representation of
is invalid. Further details are given in the error message. Check that the call to F11DCF has been preceded by a valid call to
, and that the arrays
, and
have not been corrupted between the two calls.
On entry, the CS representation of the preconditioning matrix
is invalid. Further details are given in the error message. Check that the call to F11DCF has been preceded by a valid call to
and that the arrays
have not been corrupted between the two calls.
The required accuracy could not be obtained. However, a reasonable accuracy may have been obtained, and further iterations could not improve the result. You should check the output value of RNORM for acceptability. This error code usually implies that your problem has been fully and satisfactorily solved to within or close to the accuracy available on your system. Further iterations are unlikely to improve on this situation.
Required accuracy not obtained in MAXITN iterations.
Algorithmic breakdown. A solution is returned, although it is possible that it is completely inaccurate.
${\mathbf{IFAIL}}=7$ (F11BDF, F11BEF or F11BFF)
A serious error has occurred in an internal call to one of the specified routines. Check all subroutine calls and array sizes. Seek expert help.
7 Accuracy
On successful termination, the final residual $r_k = b - A x_k$, where $k={\mathbf{ITN}}$, satisfies the termination criterion. The value of the final residual norm is returned in RNORM.
8 Further Comments
The time taken by F11DCF for each iteration is roughly proportional to the value returned from the preceding call to the factorization routine.
The number of iterations required to achieve a prescribed accuracy cannot easily be determined a priori, as it can depend dramatically on the conditioning and spectrum of the preconditioned coefficient matrix $\bar{A}={M}^{-1}A$.
Some illustrations of the application of F11DCF to linear systems arising from the discretization of two-dimensional elliptic partial differential equations, and to random-valued randomly structured
linear systems, can be found in
Salvini and Shaw (1996)
9 Example
This example solves a sparse linear system of equations using the CGS method, with incomplete $LU$ preconditioning.
9.1 Program Text
9.2 Program Data
9.3 Program Results
College of Science and Mathematics
Department of Mathematics
Placement Exams
ALEKS PPL placement is required for enrollment in MATH 75, MATH 75A, or MATH 70. Alternatively, passing Math 6 with an A or B is sufficient to enroll in Math 75, passing Math 6 with a C is sufficient
to enroll in Math 75A or 70, and passing Math 3 or Math 3L with a C is sufficient to enroll in Math 70. Students who received A or B in high school calculus class may be allowed to enroll in Math 75
without taking the ALEKS Calculus placement test.
If you passed the AP Calculus AB test with a score of 3 or higher and submitted the official scores, you already have credit for MATH 75 and may enroll directly into MATH 76. If you passed the AP
Calculus BC test with a score of 3 or higher, you have credit for MATH 75 and MATH 76 and may directly enroll in MATH 77.
Counterexamples in Analysis
These counterexamples deal mostly with the part of analysis known as real variables. The 1st half of the book discusses the real number system, functions and ...
Counterexamples in analysis / Bernard R. Gelbaum, John M.H. Olmsted. p. cm. . an unabridged, slightly corrected republication of the 1965 second.
These counterexamples, arranged according to difficulty or sophistication, deal mostly with the part of analysis known as "real variables," starting at the level of calculus. ...
Originally published: 1964
Counterexamples in Analysis. By Bernard R. Gelbaum and John M. H. Olmsted. $20.00. Publication Date: 4th June 2003.
These counterexamples deal mostly with the part of analysis known as "real variables." The 1st half of the book discusses the real number system, functions and ...
Title: Counterexamples in Analysis (Dover Books on Mathematics); Authors: Bernard R. Gelbaum, John M. H. Olmsted; Edition: illustrated, unabridged, reprint.
These counterexamples, arranged according to difficulty or sophistication, deal mostly with the part of analysis known as real variables, starting at the level ...
Counterexamples in Analysis (Dover Books on Mathematics). Bernard R. Gelbaum; John M. H. Olmsted. Published by Dover Publications (edition unknown), 2003.
1. The Real Number System. 2. Functions and Limits. 3. Differentiation. 4. Riemann Integration. 5. Sequences. 6. Infinite Series. 7. Uniform Convergence. 8.
These counterexamples deal mostly with the part of analysis known as "real variables." Covers the real number system, functions and limits, differentiation, ...
Mathomatic is a free, portable, general-purpose CAS (Computer Algebra System)
and calculator software that can symbolically solve, simplify, combine, and
compare equations, perform complex number and polynomial arithmetic, etc. It
does some calculus and is very easy to use.
George Gesslein II
Screenshot 1: Mathomatic running in a terminal emulator
Re: Check numeric values in alphanumeric variables
Hi to all,
in T-SQL there is a simple function (called isnumeric()) which checks if the value inside a column is numeric or not.
I couldn't find anything similar in SAS functions (to be used in datasteps/proc sql)...the first (and at the moment, working) solution I've found is to check with the value returned from an input:
proc sql;
select *
from work.test a
where input(a.charValue, best.) = .;
quit;
Daniele Message was edited by: Daniele Tiles
04-15-2009 12:02 PM
How do you solve the system by graphing #4x - 2y = -12# and #2x + y = -2#?
Answer 1
To solve #x# and #y# in #2# or more functions, you are actually finding #x# and #y# values which satisfy all the functions.
In other words, you are finding the intersections of the functions since at intersections, the functions are having the same value for #x# and #y#
In this case, I have plotted the functions on the same plane.
The intersection point (in this case only #1#) has the coordinate (-2,2)
#:.# #x=-2#, #y=2#
graph{(4x-2y+12)(2x+y+2)=0 [-7.9, 7.9, -3.94, 3.96]}
Answer 2
To solve the system by graphing, first graph each equation on the same coordinate plane. Then, find the point where the two lines intersect, which represents the solution to the system. In this case,
the solution is the point where the graphs of the lines 4x - 2y = -12 and 2x + y = -2 intersect.
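The intersection read off the graph can also be verified algebraically. Here is a small Python sketch using Cramer's rule (the function name and structure are illustrative):

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("lines are parallel or coincident")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

print(solve_2x2(4, -2, -12, 2, 1, -2))  # (-2.0, 2.0)
```

This confirms the graphical solution x = -2, y = 2.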
Python - Please write the move (n, a, b, c) function, which takes the parameter n, representing the number of plates in the first column A of three columns A, B, and C. Then print out the method to move all plates from A to C with the help of B
The movement of the Tower of Hanover can be easily achieved using recursive functions.
Please write the move(n, a, b, c) function, which takes the parameter n to represent the number of plates in the first column A of three columns A, B, and C. Then print out the method to move all
plates from A to C using B, for example:
# -*- coding: utf-8 -*-
def move(n, a, b, c):
    if n == 1:
        print(a, '-->', c)
        return
    move(n - 1, a, c, b)  # move the top n-1 plates from A to B, via C
    move(1, a, b, c)      # move the largest plate from A to C
    move(n - 1, b, a, c)  # move the n-1 plates from B to C, via A
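A self-contained variant (renamed hanoi here) that records the moves instead of printing them makes it easy to check that n plates always take 2^n - 1 moves:

```python
def hanoi(n, a, b, c, moves):
    """Append the moves that shift n plates from peg a to peg c, using peg b."""
    if n == 1:
        moves.append((a, c))
        return
    hanoi(n - 1, a, c, b, moves)
    moves.append((a, c))
    hanoi(n - 1, b, a, c, moves)

moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(len(moves))  # 2**3 - 1 = 7
print(moves[0])    # ('A', 'C')
```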
Fibonacci Numbers
The Fibonacci sequence is defined as follows:
$$F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}$$
The first elements of the sequence (OEIS A000045) are:
$$0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...$$
Fibonacci numbers possess a lot of interesting properties. Here are a few of them:
$$F_{n-1} F_{n+1} - F_n^2 = (-1)^n$$
$$F_{n+k} = F_k F_{n+1} + F_{k-1} F_n$$
• Applying the previous identity to the case $k = n$, we get:
$$F_{2n} = F_n (F_{n+1} + F_{n-1})$$
• From this we can prove by induction that for any positive integer $k$, $F_{nk}$ is multiple of $F_n$.
• The inverse is also true: if $F_m$ is multiple of $F_n$, then $m$ is multiple of $n$.
• GCD identity:
$$GCD(F_m, F_n) = F_{GCD(m, n)}$$
• Fibonacci numbers are the worst possible inputs for Euclidean algorithm (see Lame's theorem in Euclidean algorithm)
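These identities are easy to sanity-check numerically, for example with a short Python script:

```python
from math import gcd

def fibs(n):
    """List [F_0, F_1, ..., F_n]."""
    f = [0, 1]
    while len(f) <= n:
        f.append(f[-1] + f[-2])
    return f

F = fibs(40)
n, k = 10, 7
assert F[n - 1] * F[n + 1] - F[n] ** 2 == (-1) ** n      # Cassini's identity
assert F[n + k] == F[k] * F[n + 1] + F[k - 1] * F[n]     # addition formula
assert F[2 * n] == F[n] * (F[n + 1] + F[n - 1])          # doubling identity
assert gcd(F[24], F[36]) == F[gcd(24, 36)]               # GCD identity
print("all identities hold")
```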
Fibonacci Coding
We can use the sequence to encode positive integers into binary code words. According to Zeckendorf's theorem, any natural number $n$ can be uniquely represented as a sum of Fibonacci numbers:
$$N = F_{k_1} + F_{k_2} + \ldots + F_{k_r}$$
such that $k_1 \ge k_2 + 2,\ k_2 \ge k_3 + 2,\ \ldots,\ k_r \ge 2$ (i.e.: the representation cannot use two consecutive Fibonacci numbers).
It follows that any number can be uniquely encoded in the Fibonacci coding. And we can describe this representation with binary codes $d_0 d_1 d_2 \dots d_s 1$, where $d_i$ is $1$ if $F_{i+2}$ is
used in the representation. The code will be appended by a $1$ to indicate the end of the code word. Notice that this is the only occurrence where two consecutive 1-bits appear.
$$\begin{eqnarray} 1 &=& 1 &=& F_2 &=& (11)_F \\ 2 &=& 2 &=& F_3 &=& (011)_F \\ 6 &=& 5 + 1 &=& F_5 + F_2 &=& (10011)_F \\ 8 &=& 8 &=& F_6 &=& (000011)_F \\ 9 &=& 8 + 1 &=& F_6 + F_2 &=& (100011)_F \
\ 19 &=& 13 + 5 + 1 &=& F_7 + F_5 + F_2 &=& (1001011)_F \end{eqnarray}$$
The encoding of an integer $n$ can be done with a simple greedy algorithm:
1. Iterate through the Fibonacci numbers from the largest to the smallest until you find one less than or equal to $n$.
2. Suppose this number was $F_i$. Subtract $F_i$ from $n$ and put a $1$ in the $i-2$ position of the code word (indexing from 0 from the leftmost to the rightmost bit).
3. Repeat until there is no remainder.
4. Add a final $1$ to the codeword to indicate its end.
To decode a code word, first remove the final $1$. Then, if the $i$-th bit is set (indexing from 0 from the leftmost to the rightmost bit), sum $F_{i+2}$ to the number.
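The greedy encoder and the decoder can be sketched in a few lines of Python (helper names are illustrative); the outputs match the table above:

```python
def fib_encode(n):
    """Fibonacci code word for a positive integer n (Zeckendorf + terminating 1)."""
    fibs = [1, 2]                   # F_2, F_3, ...
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    bits = [0] * len(fibs)
    for i in range(len(fibs) - 1, -1, -1):  # greedy: largest Fibonacci first
        if fibs[i] <= n:
            bits[i] = 1
            n -= fibs[i]
    while bits and bits[-1] == 0:   # drop unused high Fibonacci positions
        bits.pop()
    return ''.join(map(str, bits)) + '1'

def fib_decode(code):
    """Inverse of fib_encode: strip the final 1, sum F_{i+2} for each set bit."""
    fibs = [1, 2]
    while len(fibs) < len(code) - 1:
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for f, b in zip(fibs, code[:-1]) if b == '1')

for n in (1, 2, 6, 8, 9, 19):
    print(n, fib_encode(n))
```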
Formulas for the $n^{\text{th}}$ Fibonacci number
Closed-form expression
There is a formula known as "Binet's formula", even though it was already known by de Moivre:
$$F_n = \frac{\left(\frac{1 + \sqrt{5}}{2}\right)^n - \left(\frac{1 - \sqrt{5}}{2}\right)^n}{\sqrt{5}}$$
This formula is easy to prove by induction, but it can be deduced with the help of the concept of generating functions or by solving a functional equation.
You can immediately notice that the second term's absolute value is always less than $1$, and it also decreases very rapidly (exponentially). Hence the value of the first term alone is "almost" $F_n$
. This can be written strictly as:
$$F_n = \left[\frac{\left(\frac{1 + \sqrt{5}}{2}\right)^n}{\sqrt{5}}\right]$$
where the square brackets denote rounding to the nearest integer.
As these two formulas would require very high accuracy when working with fractional numbers, they are of little use in practical calculations.
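Still, the rounded closed form is a nice sanity check for small $n$, where double precision suffices (the floating-point error eventually breaks it as $n$ grows):

```python
from math import sqrt

def fib_binet(n):
    """Rounded Binet formula; reliable only for small n in double precision."""
    phi = (1 + sqrt(5)) / 2
    return round(phi ** n / sqrt(5))

a, b = 0, 1
for n in range(41):
    assert fib_binet(n) == a, n   # agrees with the iterative values
    a, b = b, a + b
print("rounded Binet matches iteration up to n = 40")
```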
Fibonacci in linear time
The $n$-th Fibonacci number can be easily found in $O(n)$ by computing the numbers one by one up to $n$. However, there are also faster ways, as we will see.
We can start with an iterative approach that applies the formula $F_n = F_{n-1} + F_{n-2}$ directly, keeping only the last two values and taking into account the base cases $F_0$ and $F_1$.
int fib(int n) {
    int a = 0;
    int b = 1;
    for (int i = 0; i < n; i++) {
        int tmp = a + b;
        a = b;
        b = tmp;
    }
    return a;
}
In this way, we obtain a linear solution, $O(n)$ time, saving all the values prior to $n$ in the sequence.
Matrix form
It is easy to prove the following relation:
$$\begin{pmatrix} 1 & 1 \cr 1 & 0 \cr\end{pmatrix} ^ n = \begin{pmatrix} F_{n+1} & F_{n} \cr F_{n} & F_{n-1} \cr\end{pmatrix}$$
Thus, in order to find $F_n$ in $O(\log n)$ time, we must raise the matrix to the $n$-th power. (See Binary exponentiation)
struct matrix {
    long long mat[2][2];
    friend matrix operator *(const matrix &a, const matrix &b) {
        matrix c;
        for (int i = 0; i < 2; i++) {
            for (int j = 0; j < 2; j++) {
                c.mat[i][j] = 0;
                for (int k = 0; k < 2; k++) {
                    c.mat[i][j] += a.mat[i][k] * b.mat[k][j];
                }
            }
        }
        return c;
    }
};

matrix matpow(matrix base, long long n) {
    matrix ans{ {
        {1, 0},
        {0, 1}
    } };
    while (n) {
        if (n & 1)
            ans = ans * base;
        base = base * base;
        n >>= 1;
    }
    return ans;
}

long long fib(int n) {
    matrix base{ {
        {1, 1},
        {1, 0}
    } };
    return matpow(base, n).mat[0][1];
}
Fast Doubling Method
By expanding the above matrix expression for $n = 2\cdot k$
$$ \begin{pmatrix} F_{2k+1} & F_{2k}\\ F_{2k} & F_{2k-1} \end{pmatrix} = \begin{pmatrix} 1 & 1\\ 1 & 0 \end{pmatrix}^{2k} = \begin{pmatrix} F_{k+1} & F_{k}\\ F_{k} & F_{k-1} \end{pmatrix} ^2 $$
we can find these simpler equations:
$$\begin{align} F_{2k+1} &= F_{k+1}^2 + F_{k}^2 \\ F_{2k} &= F_k(F_{k+1}+F_{k-1}) = F_k (2F_{k+1} - F_{k}) \end{align}$$
Thus, using the above two equations, Fibonacci numbers can be calculated easily with the following code:
pair<int, int> fib(int n) {
    if (n == 0)
        return {0, 1};
    auto p = fib(n >> 1);
    int c = p.first * (2 * p.second - p.first);
    int d = p.first * p.first + p.second * p.second;
    if (n & 1)
        return {d, c + d};
    else
        return {c, d};
}
The above code returns $F_n$ and $F_{n+1}$ as a pair.
Periodicity modulo p
Consider the Fibonacci sequence modulo $p$. We will prove the sequence is periodic.
Let us prove this by contradiction. Consider the first $p^2 + 1$ pairs of Fibonacci numbers taken modulo $p$:
$$(F_0,\ F_1),\ (F_1,\ F_2),\ \ldots,\ (F_{p^2},\ F_{p^2 + 1})$$
There can only be $p$ different remainders modulo $p$, and at most $p^2$ different pairs of remainders, so there are at least two identical pairs among them. This is sufficient to prove the sequence
is periodic, as a Fibonacci number is only determined by its two predecessors. Hence if two pairs of consecutive numbers repeat, that would also mean the numbers after the pair will repeat in the
same fashion.
We now choose two pairs of identical remainders with the smallest indices in the sequence. Let the pairs be $(F_a,\ F_{a + 1})$ and $(F_b,\ F_{b + 1})$. We will prove that $a = 0$. If this was false,
there would be two previous pairs $(F_{a-1},\ F_a)$ and $(F_{b-1},\ F_b)$, which, by the property of Fibonacci numbers, would also be equal. However, this contradicts the fact that we had chosen
pairs with the smallest indices, completing our proof that there is no pre-period (i.e the numbers are periodic starting from $F_0$).
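The period (known as the Pisano period) can be found by scanning for the first recurrence of the starting pair $(0, 1)$, which by the argument above happens within $p^2$ steps:

```python
def pisano(p):
    """Length of the period of the Fibonacci sequence modulo p (p >= 2)."""
    a, b = 0, 1
    for k in range(1, p * p + 1):
        a, b = b, (a + b) % p
        if (a, b) == (0, 1):   # the starting pair repeats: period found
            return k
    # unreachable: the pigeonhole argument guarantees a repeat within p^2 steps

for p in (2, 3, 5, 10):
    print(p, pisano(p))
```

For example the sequence modulo 10 repeats with period 60.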
Practice Problems
Parallelogram Area Calculator
A parallelogram area calculator is a tool or a function that calculates the area of a parallelogram. A parallelogram is a four-sided shape with opposite sides that are parallel and equal in length.
The area of a parallelogram is the amount of space inside the shape, and it can be calculated using the base and the height of the parallelogram.
What is a Parallelogram Area Calculator?
A Parallelogram Area Calculator is an online tool that calculates the area of a parallelogram, given its base and height. A parallelogram is a quadrilateral with two pairs of parallel sides, where
the opposite sides are equal in length and opposite angles are equal in measure.
To use the parallelogram area calculator, follow these steps:
Step 1: Input the base length
Enter the length of the base of the parallelogram in the designated input box.
Step 2: Input the height
Enter the height of the parallelogram in the designated input box.
Step 3: Calculate the area
Click on the “Calculate” button to obtain the area of the parallelogram.
The formula for calculating the area of a parallelogram is:
Area = base x height
The result will be given in square units, such as square meters or square feet.
Using a parallelogram area calculator can be a quick and convenient way to find the area of a parallelogram without having to manually perform the calculation.
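The calculation itself is easy to reproduce in code; a minimal sketch (the function name and the input validation are illustrative):

```python
def parallelogram_area(base, height):
    """Compute the area of a parallelogram as base x height.

    The result is in square units of whatever unit the inputs use
    (e.g., square meters if base and height are in meters)."""
    if base <= 0 or height <= 0:
        raise ValueError("base and height must be positive")
    return base * height
```

For instance, `parallelogram_area(6, 8)` reproduces the worked example later in this page.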
Cutting down the timeframe for calculations
There are several ways to reduce the time it takes to perform calculations, depending on the specific situation and type of calculations involved. Here are a few general tips:
1. Use a calculator or spreadsheet software: For complex calculations or large datasets, it can be more efficient to use a calculator or spreadsheet software instead of doing the calculations by hand.
2. Automate repetitive tasks: If you frequently perform the same calculations or tasks, consider automating them using scripting or programming tools. This can save a significant amount of time and
reduce the risk of errors.
3. Use shortcuts and formulas: Many software programs and calculators have built-in shortcuts and formulas that can simplify and speed up calculations. Learn and use these whenever possible.
4. Optimize data structures and algorithms: When working with large datasets or complex algorithms, optimizing the data structures and algorithms used can significantly reduce calculation time.
5. Use parallel processing: If your computer has multiple cores or processors, consider using parallel processing techniques to split the workload across multiple threads or processes. This can
speed up calculations by taking advantage of the additional processing power.
Overall, reducing calculation time requires a combination of technical skills, software tools, and optimization techniques. By applying these strategies, you can significantly improve your
productivity and efficiency when performing calculations.
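Tip 5 can be sketched with Python's standard library (the function names are illustrative; whether threads or processes help depends on the workload):

```python
from concurrent.futures import ThreadPoolExecutor

def heavy_calc(x):
    # Stand-in for an expensive, independent calculation.
    return sum(i * i for i in range(x))

def run_parallel(inputs, workers=4):
    """Split independent calculations across a pool of workers.

    Threads help most when the work releases the GIL (I/O, NumPy, C
    extensions); for pure-Python CPU-bound work, ProcessPoolExecutor
    is usually the better choice."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(heavy_calc, inputs))
```

The pool preserves input order, so the results line up with the inputs regardless of which worker finished first.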
Checking the calculation steps of the parallelogram area calculator
To check the calculation steps of the parallelogram area calculator, we can manually perform the calculation using the formula for the area of a parallelogram, which is:
Area = base x height
Let’s say we have a parallelogram with a base of 6 units and a height of 8 units. The area of the parallelogram can be calculated as follows:
Area = 6 x 8
Area = 48 square units
To check if the parallelogram area calculator is accurate, we can input these values into the calculator and compare the result to our manual calculation. If the calculator provides the same result,
we can confirm that the calculation steps are correct.
For example, if we input a base length of 6 units and a height of 8 units into the parallelogram area calculator, we should obtain an area of 48 square units. If the result matches our manual
calculation, we can be confident that the calculation steps of the calculator are correct.
It’s always a good idea to check the results of an online calculator against manual calculations to ensure accuracy, especially when working with important or complex calculations.
Theanets 0.7.3 documentation
class theanets.layers.recurrent.GRU(size, inputs, name=None, activation='relu', **kwargs)
Gated Recurrent Unit layer.
The Gated Recurrent Unit lies somewhere between the LSTM and the RRNN in complexity. Like the RRNN, its hidden state is updated at each time step to be a linear interpolation between the previous
hidden state, \(h_{t-1}\), and the “target” hidden state, \(h_t\). The interpolation is modulated by an “update gate” that serves the same purpose as the rate gates in the RRNN. Like the LSTM,
the target hidden state can also be reset using a dedicated gate. All gates in this layer are activated based on the current input as well as the previous hidden state.
The update equations in this layer are largely those given by [Chu14], page 4, except for the addition of a hidden bias term. They are:
\[\begin{split}\begin{eqnarray} r_t &=& \sigma(x_t W_{xr} + h_{t-1} W_{hr} + b_r) \\ z_t &=& \sigma(x_t W_{xz} + h_{t-1} W_{hz} + b_z) \\ \hat{h}_t &=& g\left(x_t W_{xh} + (r_t \odot h_{t-1}) W_{hh} + b_h\right) \\ h_t &=& (1 - z_t) \odot h_{t-1} + z_t \odot \hat{h}_t. \end{eqnarray}\end{split}\]
Here, \(g(\cdot)\) is the activation function for the layer, and \(\sigma(\cdot)\) is the logistic sigmoid, which ensures that the two gates in the layer are limited to the open interval (0, 1).
The symbol \(\odot\) indicates elementwise multiplication.
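As a concrete illustration, the update equations can be traced for a single hidden unit in plain Python. The dictionary keys mirror the gate and weight names in the parameter list that follows; a real layer operates on vectors and weight matrices, not scalars:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x_t, h_prev, W, b, g=math.tanh):
    """One GRU update for a single hidden unit.

    W maps 'xr', 'hr', 'xz', 'hz', 'xh', 'hh' to scalar weights and
    b maps 'r', 'z', 'h' to scalar biases (names chosen to mirror the
    parameter list; this is a sketch, not the theanets implementation)."""
    r = sigmoid(x_t * W['xr'] + h_prev * W['hr'] + b['r'])    # reset gate
    z = sigmoid(x_t * W['xz'] + h_prev * W['hz'] + b['z'])    # update gate
    h_hat = g(x_t * W['xh'] + r * h_prev * W['hh'] + b['h'])  # target state
    return (1.0 - z) * h_prev + z * h_hat                     # interpolation
```

With all weights and biases zero, both gates sit at 0.5 and the target state is 0, so each step halves the hidden state.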
Parameters

- bh: vector of bias values for each hidden unit
- br: vector of reset biases
- bz: vector of rate biases
- xh: matrix connecting inputs to hidden units
- xr: matrix connecting inputs to reset gates
- xz: matrix connecting inputs to rate gates
- hh: matrix connecting hiddens to hiddens
- hr: matrix connecting hiddens to reset gates
- hz: matrix connecting hiddens to rate gates

Outputs

- out: the post-activation state of the layer
- pre: the pre-activation state of the layer
- hid: the pre-rate-mixing hidden state
- rate: the rate values
[Chu14] J. Chung, C. Gulcehre, K. H. Cho, & Y. Bengio (2014), “Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling” http://arxiv.org/abs/1412.3555v1
__init__(size, inputs, name=None, activation='relu', **kwargs)

Methods

- __init__(size, inputs[, name, activation])
- add_bias(name, size[, mean, std]): Helper method to create a new bias vector.
- add_weights(name, nin, nout[, mean, std, ...]): Helper method to create a new weight matrix.
- connect(inputs): Create Theano variables representing the outputs of this layer.
- find(key): Get a shared variable for a parameter by name.
- initial_state(name, batch_size): Return an array suitable for representing the initial state.
- log(): Log some information about this layer.
- output_name([name]): Return a fully-scoped name for the given layer output.
- setup()
- to_spec(): Create a specification dictionary for this layer.
- transform(inputs): Transform inputs to this layer into outputs for the layer.

Attributes

- input_size: For networks with one input, get the input size.
- num_params: Total number of learnable parameters in this layer.
- params: A list of all parameters in this layer.
transform(inputs)

Transform inputs to this layer into outputs for the layer.

Parameters:

inputs : dict of theano expressions
    Symbolic inputs to this layer, given as a dictionary mapping string names to Theano expressions. See base.Layer.connect().

Returns:

outputs : dict of theano expressions
    A map from string output names to Theano expressions for the outputs from this layer. This layer type generates a “pre” output that gives the unit activity before applying the layer’s activation function, a “hid” output that gives the post-activation values before applying the rate mixing, and an “out” output that gives the overall output.

updates : sequence of update pairs
    A sequence of updates to apply to this layer’s state inside a theano function.
Absolute Cell reference in VB
Sep 12, 2014
Hello all,
Can anyone advise how to reference a cell in VB so that the macro continues to work even if rows / columns are inserted in the worksheet at a later date.
I am putting together a form for other users to complete and using code to ensure that certain rules are followed for defaults and fields for completion. However, just using the cell ref (C4) or
($C$4) seems to fail if I then insert a row above row 4.
For ref. an example from the code is:
If Not Application.Intersect(Target, Range("C4")) Is Nothing Then
    Application.EnableEvents = False
    If Range("C4") = "Capped Usage" Then
        ' Show row 5 when C4 is "Capped Usage"; otherwise hide it and clear C17
        Rows("5:5").EntireRow.Hidden = False
    Else
        Rows("5:5").EntireRow.Hidden = True
        Range("C17") = ""
    End If
    Application.EnableEvents = True
End If
Any help would be much appreciated - note that I am (very) new to VB and am mostly using trial and error from stuff copied from the net!
You are adding rows above row 4, and still want C4 to be the cell with the variable? Or will it be moved down?
What you can do is name the cell and row in question, and not use cell references such as "C4" but rather change it to a specific name of your choosing.
Meaning you should make your code like this, and change the cells in question, and ROW to the names you want, I used fairly obvious ones.
Private Sub Worksheet_Change(ByVal Target As Range)
    If Not Application.Intersect(Target, Range("CellC4")) Is Nothing Then
        Application.EnableEvents = False
        If Range("CellC4") = "Capped Usage" Then
            Range("Rownr5").EntireRow.Hidden = False
        Else
            Range("Rownr5").EntireRow.Hidden = True
            Range("CellC17") = ""
        End If
        Application.EnableEvents = True
    End If
End Sub
You name them like this in the initial setup, if you add a row, the "Rownr5", will be row 6, but is still named "Rownr5" and the macro will still work.
If you wonder how to change the names of cells/rows, look at this:
How do I create a named cell in Microsoft Excel?
Sep 12, 2014
Thank you,
This worked a treat. Took me a while to realise that I had to change Row to Range to cope with the cell group on the hidden command but after that it is just what I needed.
Thanks again!
Great, and you're welcome
Geodesic interpolation for reaction pathways
The development of high throughput reaction discovery methods such as the ab initio nanoreactor demands massive numbers of reaction rate calculations through the optimization of minimum energy
reaction paths. These are often generated from interpolations between the reactant and product endpoint geometries. Unfortunately, straightforward interpolation in Cartesian coordinates often leads
to poor approximations that lead to slow convergence. In this work, we reformulate the problem of interpolation between endpoint geometries as a search for the geodesic curve on a Riemannian
manifold. We show that the perceived performance difference of interpolation methods in different coordinates is the result of an implicit metric change. Accounting for the metric explicitly allows
us to obtain good results in Cartesian coordinates, bypassing the difficulties caused by redundant coordinates. Using only geometric information, we are able to generate paths from reactants to
products which are remarkably close to the true minimum energy path. We show that these geodesic paths are excellent starting guesses for minimum energy path algorithms.
The search for minimum energy paths (MEPs) is a ubiquitous task in the study of chemical reactions. The MEP,^1,2 often referred to as the “reaction path,” provides a compact description of the
rearrangement of atoms from one molecular structure to another, forming the basis of our intuitive understanding of chemical reaction mechanisms. The highest energy point along the MEP is a
transition state, which allows an estimate of reaction rates through transition state theory (TST).^3–5 Recently, the ab initio nanoreactor^6 and similarly motivated methods^7–11 for automatic
reaction discovery have been introduced which require massive numbers of reaction rate calculations. This has intensified the demand for highly efficient and robust methods for MEP optimization.
Construction of a MEP often requires a reasonable path from reactants to products as an initial guess. These initial guess paths are usually generated by interpolation, such as Linear Synchronous
Transit (LST),^12 but can also be generated from molecular dynamics simulations. Nudged Elastic Band (NEB)^13 and the closely related “string” methods^14 are often used to refine the initial pathway.
A practical difficulty is that the rate of convergence as well as the quality of the final converged MEP depends strongly on the choice of the initial path.
Interpolation in Cartesian coordinates can result in improbable high-energy structures (for example, when atoms come into close contact) and is one of the primary sources of instability in MEP
optimization.^15 In fact, the topological structure of molecular potential energy surfaces (PESs) is better captured by internal coordinates.^16–18 Throughout this paper, we use internal coordinates
to denote a set of (possibly redundant) functions of the Cartesian coordinates that are invariant to overall rotations and translations of the molecule.
In general, it is impossible to assign a single global set of linearly independent internal coordinates that accurately describes the structure of a molecule. However, in the neighborhood of any
given configuration (such as the vicinity of an equilibrium structure), it is always possible to construct a well-defined set of coordinates. In the language of differential geometry, these are local
“coordinate charts” and the internal coordinate manifold is locally isomorphic to $\mathbb{R}^{3N-6}$, where N is the number of atoms. Globally, however, the topology of the internal coordinate manifold is more
complex and does not resemble a Euclidian space.
Ignoring the non-linear dependencies between internal coordinates^19 has led to some efficient algorithms^20 for the optimization of equilibrium geometries and transition states.^21–23 These methods
form the generalized inverse of the Wilson G matrix^24,25 to transform the gradients and Hessian matrices. The G matrix transformation, however, is geometry dependent. Generalization of this local
analysis method to problems that involve the global structure of the internal coordinate space, such as interpolation or reaction path optimization, requires the tools of differential geometry. One
example of such analysis can be found in previous work on potential energy surface interpolation,^26 where Cartesian and internal coordinate spaces are treated as Riemannian manifolds. In that work,
singular value decomposition (SVD) of the Wilson B matrix is used to construct local coordinate charts in which local Taylor expansions of the potential energy are expressed. These Taylor expansions
are then smoothly patched together to obtain an interpolated global potential energy surface.
The space of configurations of a molecule is also known as the reduced configuration space^27 and is constructed as the quotient space $\mathbb{R}^{3N}/SE(3)$, where SE(3) is the Special Euclidean group of translations and rotations acting on $\mathbb{R}^3$. In other words, a point in the reduced configuration space corresponds to all the possible Cartesian geometries of a molecule that differ only by an overall translation or rotation. In cases where chirality of the molecule is not of concern, the reduced configuration space is sometimes instead defined as $\mathbb{R}^{3N}/E(3)$, where geometries that are
mirror images are also collapsed to the same point in the reduced configuration space. The group of rotations, SO(3), has a non-trivial topology, so the topology of the quotient space is expected to
also be rather complicated. A concrete implication is the algebraic “syzygy” relationships between bond lengths that arise for molecules with more than four atoms.^28,29
Current path interpolation methods such as LST,^12 Image Dependent Pair Potential (IDPP),^30 and the Nebterpolator^31 employ a two-step process to perform interpolation in a space of redundant
coordinates. A direct interpolation is first performed to yield a straight line in the redundant internal coordinate space. Because this does not account for nonlinear constraint relationships
between the redundant internal coordinates, mathematically inconsistent sets of internal coordinate values can be generated. For example, the triangle inequality amongst bond lengths can be violated.
Here, we refer to these inconsistent combinations of internal coordinate values as infeasible. The second step of the above algorithms must therefore find a path that resides in the feasible
subspace, ideally lying close to the original infeasible path. Often, a minimization process is used to find valid geometries with internal coordinates closest to these interpolated values. However,
as we will show in detail in Secs. II and IV, various problems can arise in these methods.
In this work, we propose a different interpolation approach, which performs displacements only in Cartesian coordinates. The method, which is closely related to Intrinsic Reaction Coordinate (IRC)
methods, is detailed in Sec. III, where we define the interpolation path between two geometries as the geodesic curve in a Riemannian manifold. An efficient algorithm is provided to determine
approximate geodesics by minimizing the arc length exclusively within Cartesian coordinates. We then show how our approach is connected to the local G matrix projection^20 and B matrix SVD methods.^
26 In fact, we show that the G matrix projection method is equivalent to choosing a specific local chart on the manifold in this approach and explain why such projection schemes are inappropriate for
problems involving global geometry such as interpolation or reaction path optimization.
The basic connection between geodesic curves and interpolation paths that we exploit in this work can be understood without the cumbersome mathematics of differential geometry. When the initial and
final structures are in close proximity, the difficulty of infeasible coordinate values disappears. In such cases, all existing interpolation methods work well. Taking advantage of this,
interpolation paths for reactions involving large amplitude motions can be obtained by requiring the internal coordinates of any small segments of the path to be close to the linear interpolation of
the coordinate values at the end points of each segment. This is in fact equivalent to the definition of a geodesic curve and is the main insight exploited in our method.
Geodesic interpolation is formulated as an approximation to the geodesic formalism of IRC^32 and bears some similarity to the variational reaction energy (VRE)^33–35 and variational reaction
coordinate (VRC) methods^36,37 for MEP optimization. The key idea is to find a metric for the reduced configuration space manifold that approximates the interactions which define the potential
energy. This explains the surprising ability of geodesic interpolation to generate paths that are very similar to MEPs, using only geometric information about the molecule.
In Sec. IV, numerical examples are provided to illustrate the interpolation process in detail. Examples are provided to show how and why a number of previous interpolation methods fail. We then show
that the new method leads to more meaningful paths with lower energy, serving as better starting guesses for NEB calculations that converge quickly and reliably and also tending to lead to lower
energy MEPs.
Interpolation between geometries is conventionally performed by individually interpolating each coordinate, treating all coordinates as independently varying. To properly characterize large amplitude
motions such as chemical reactions, it is often necessary to use more coordinates than the total number of degrees of freedom in the system. In such cases, there are necessarily functional
relationships between internal coordinates such that they are not linearly independent. Such sets of internal coordinates are referred to as redundant internal coordinates.
Treated as if they were independent, redundant internal coordinates span a space isomorphic to $\mathbb{R}^M$ for some M > 3N − 6 (the dimensionality of the reduced configuration space). The image of the
3N-dimensional Cartesian coordinate space embedded in this redundant internal coordinate space, i.e., the subset of redundant internal coordinate space that corresponds to a realizable Cartesian
geometry, is hereafter referred to as the feasible subspace. In the redundant internal space, the feasible subspace is curved. Therefore, direct interpolation paths constituting a straight-line
segment in $\mathbb{R}^M$ can lie outside of the feasible space and may not be physically meaningful.
This problem has long been known, and a number of approaches have been proposed to overcome it. In the widely used Linear Synchronous Transit (LST) interpolation scheme,^12 the interpolation path is
obtained by minimizing the difference between the (feasible) internal coordinates of the images and the targeted (but potentially infeasible) direct interpolation values through a weighted
least-squares procedure. As reported previously,^30 this highly nonlinear minimization tends to produce discontinuous paths. The problem is exacerbated in larger molecules where the interpolated bond
lengths can be far from any feasible geometry and in molecules with large amplitude angular and/or torsional motions. The Image Dependent Pair Potential (IDPP) method^30 was introduced in order to
alleviate these difficulties. IDPP uses the squared norm of the difference between the internal coordinates of an image and their target interpolated values as the energy of that image in a NEB-like
optimization process. In other words, for each image, the weighted distance to the infeasible interpolation is minimized like in LST, but motion along the path is determined by a spring force which
penalizes large geometric changes along the path. The energy function to be minimized takes the form
$$E_k(\vec{R}_k) = \sum_{i<j} w\!\left(d_{ij}^k\right)\left(d_{ij}^k - \left|\vec{r}_i(\vec{R}_k) - \vec{r}_j(\vec{R}_k)\right|\right)^2,$$

where $\vec{R}_k$ are the Cartesian coordinates of the kth image (i.e., bead), $\vec{r}_i(\vec{R}_k)$ are the Cartesian coordinates of the ith atom in the kth image, and $d_{ij}^k$ is the target bond length between atoms i and j for image k, obtained by linearly interpolating the corresponding bond length between the initial and final points. The weighting function $w(d) = 1/d^4$ is used to ensure that distances between atoms that are close to each other and strongly interacting are more accurately reproduced than those for pairs of atoms that are far apart and weakly interacting. This weight factor is crucial for the quality of the resulting interpolation. In fact, the chemical intuition carried in such weight factors is the most important reason why IDPP and LST work better than Cartesian interpolation.
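The per-image objective just described can be sketched as follows. This is an illustration of the functional form only: the helper name is invented, and evaluating the weight at the target distance $d_{ij}^k$ is an assumption.

```python
import math

def idpp_image_objective(coords, target_dists):
    """Sketch of the per-image IDPP objective (not the reference code).

    coords: list of (x, y, z) atom positions for one image.
    target_dists: dict mapping (i, j) with i < j to the linearly
        interpolated target distance d_ij^k for this image.
    The weight w(d) = 1/d^4 emphasizes short, strongly interacting
    atom pairs over distant, weakly interacting ones."""
    total = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])   # actual pair distance
            target = target_dists[(i, j)]
            total += (target - d) ** 2 / target ** 4
    return total
```

Minimizing this objective per image, together with the NEB-like spring forces between images, is what drives the IDPP optimization described in the text.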
The same functional is used as the target function in the LST least-squares minimization step. However, in IDPP, each image has a different energy expression (hence the name of the method—Image
Dependent Pair Potential). In the subsequent NEB-like process, a spring force governs the movement of images along the path, forcing the path to be smooth and preventing discontinuities. However, it
is important to note that once spring forces between adjacent images are introduced, it is not obvious how to formulate the IDPP process as the minimization of an objective function (similar issues
arise in NEB). As we will show in Sec. IV, the IDPP process is prone to oscillation and can fail to converge.
A more robust method to remove discontinuities in interpolated paths is provided by the Nebterpolator,^31 which performs direct interpolation in a set of redundant internal coordinates that also
includes bond angles and torsions. The Nebterpolator method often uses trajectories from molecular dynamics simulations as starting guesses and is shown in Sec. IV to be significantly more robust
than the Cartesian linear interpolation guesses generated by LST and IDPP. Similar to LST, the initial guess path in the redundant coordinates is refined through least-squares minimization to obtain
geometries that reproduce the interpolated values of internal coordinates as closely as possible. In the Nebterpolator, discontinuities are actively detected by monitoring the Cartesian distance
between each pair of neighboring images. When the Cartesian distance between two neighboring images is greater than the length of an adjacent segment by a factor of two or more, the path is
considered discontinuous. In such cases, the square of the Cartesian distance between these two images is added to the target function as a spring term, and the weight of this term is automatically
increased until discontinuities disappear. This pulls neighboring images closer in Cartesian space and ensures a smooth interpolated path.
Although both IDPP and Nebterpolation can remove discontinuities in the interpolated path, this comes at the price of significant degradation in the quality of the final interpolated paths (see Sec.
IV). The NEB-like process in IDPP sometimes creates extremely distorted intermediate structures, leading to instability and convergence failures. Although Nebterpolation can consistently generate
continuous pathways, the resulting pathways can have very large barriers because the large spring term causes portions of the path to effectively revert to Cartesian interpolation.
To summarize, all three methods described above start with direct interpolation paths in redundant internal coordinates, which are often infeasible. This is followed by an optimization process, which
identifies a feasible path close to the direct interpolation path. However, when the end points are very different geometries, the direct interpolation is often very distant from the feasible set,
and the highly nonlinear second step is complicated by the existence of multiple local minima.
From the discussion above, it is clear that an appropriate choice of coordinates can result in interpolation paths that resemble MEPs. At the same time, the need to operate in redundant internal
coordinates introduces various difficulties. In this section, we use the tools of differential geometry to investigate the influence of coordinate choice on interpolation paths and MEPs. We show that
they are both coordinate independent and the perceived difference in interpolations is caused by implicit changes of the metric. In fact, the appropriate metric for interpolation paths is already
known from previous results on IRCs. This realization leads us to develop a new interpolation scheme that provides better approximations to MEPs, while avoiding the need to operate in redundant internal coordinates.
A. The influence of metric and coordinates on minimal energy paths and interpolations
There has been significant confusion in the literature concerning the dependence of the MEP on the choice of coordinates, which has been detailed and explained previously.^38–40 Here, we carefully
distinguish between the metric and the choice of coordinates, which is key to avoiding previous misunderstandings. In the framework of modern coordinate-free differential geometry, a Riemannian
manifold is defined as a pairing of a topological space with a metric. If the metric is not correctly considered during coordinate transformations, the MEP appears to depend on the chosen coordinate
system. However, it has been established^38–40 that a correct formulation in terms of covariant derivatives^41 leads to an MEP that is independent of the coordinate system. On the other hand, given a
definition of MEP, the corresponding metric is uniquely determined. The most widely used definition of MEP is the Intrinsic Reaction Coordinate (IRC)^10,42 proposed by Fukui, for which the kinematic
metric,^40 i.e., the mass-weighted Euclidean metric, should be used.
Given that MEPs are coordinate independent, the reader might find it odd that interpolations in internal coordinates would provide a better estimation for chemical reactions compared to Cartesian
coordinates. Indeed, a proper definition of interpolation should depend only on the metric and not on the coordinates used. However, metric choices in interpolation methods are often implicit and
empirical. Inconsistencies within the same procedure, as discussed in Sec. II, frequently cause divergence and discontinuities. To design better performing interpolation schemes, it is crucial to
first establish a metric that yields paths close to true MEPs.
An important relation between geometry and energetics of molecules was provided by Tachibana and Fukui in the geodesic formulation of IRC^32 and in their analysis of the differential geometry of
chemical reactions.^40 Fukui showed the existence of one particular metric under which geodesic curves correspond to the IRCs. Note that this does not imply metrics can be changed at will. As
representations of IRCs, gradient descent paths are associated with the mass-weighted Cartesian metric, and the geodesic curves are associated with Fukui’s metric. The expression for reaction paths
needs to be properly adapted whenever metrics are changed to retain the correct behavior.
Key concepts of differential geometry used in this work are briefly recapitulated in Appendix A. For simplicity, from this point on, we will adopt the Einstein summation convention. Fukui, et al.
showed that the IRC is a geodesic curve on a Riemannian manifold, where the metric, in the form of the following length element, is derived from the potential energy of the molecule:

$$ds^2 = \left(\frac{\partial V}{\partial q^k}\, g^{kl}\, \frac{\partial V}{\partial q^l}\right) g_{ij}\, dq^i\, dq^j,$$

where $g_{ij}$ is the Euclidean metric of mass-weighted Cartesians expressed in the current internal coordinate space, while $g^{kl}$ is its inverse. This length element definition is invariant under different
choices of coordinates q as it should be. A geodesic curve is a generalization of a straight line in a curved space and is therefore directly related to interpolation. When the feasibility of
redundant coordinates is ignored, direct interpolations are just straight lines in the internal coordinate space. In fact, by the Nash embedding theorem,^43,44 it is possible to embed Fukui’s
Riemannian manifold with the metric given by Eq. (2) into a Euclidean space of high dimensionality, and if the basis of this space is used as “internal coordinates,” then finding a geodesic curve on
the hypersurface of feasible geometries will yield the exact IRC.
Unfortunately, the Nash embedding theorem only states the existence of such an embedding with an upper bound for the required dimensionality of the embedding space. Generating such an embedding
exactly would require full knowledge of the potential energy surface (including its first and second derivatives with respect to molecular displacements), which is generally not available.
Fortunately, with the tools of Riemannian geometry, geodesics can be calculated with any coordinates or basis. Indeed, Fukui’s seminal contribution made clear the distinction between the metric
tensor used to define a notion of distance on the PES manifold and the different possible coordinate systems used to represent the PES. Importantly, the metric tensor is an entity that is independent
of the choice of coordinates. Using different coordinate systems will result in different matrix representations of the metric tensor, but the geodesic curves calculated using the same metric are
always the same.
This suggests that the reason some interpolation schemes work better than others is not because of the choice of coordinates, but rather the (possibly implicit) choice of the metric that is used.
With the correct metric, geodesics can be directly located using Cartesian coordinates, avoiding the need to explicitly solve the embedding problem.
When mass-weighted Cartesians are used as the basis, Eq. (2) simplifies to
and therefore the matrix of the metric tensor in mass-weighted Cartesians is
Supposing that internal coordinates q are used, the Euclidean metric gives the metric tensor
and one can see that a straight line in {q[i]} will be a good approximation to the true IRC if and only if
that is, if the coordinates q capture the shape of the potential or equivalently, if the potential energy surface is almost flat in q. The success of interpolating global PESs in coordinates
constructed from inverse bondlengths^26,45 implies that the PES appears nearly flat when expressed in inverse bondlengths.
The idea in this work is to keep the induced metric structure of Eq. (5) so that the manifold is a local isometric immersion in the Euclidean space of redundant internal coordinates; in other words,
we choose a metric that is Euclidean when using a specific set of redundant internal coordinates (see Appendix C for details). This property allows for a straightforward interpretation of the
internal coordinate space consistent with the conventional interpolation picture. This also greatly simplifies the process of geodesic optimization (see Appendix B). The approximation in Eq. (6) is
easily achieved by using one of the traditional diatomic potential functions, such as Morse or Lennard-Jones, for the internal coordinates q. Chemically, it is generally true that any two atoms
interact more strongly at smaller distances, where the interactions often follow exponential or inverse power scaling with the distance. On the other hand, it is very difficult to predict if the
interaction between two atoms is attractive or repulsive. The form of Eq. (6) allows us to properly account for the general distance scaling of interatomic interactions, without the need to determine
their signs, which greatly simplifies the problem.
In this work, the functions q[i] are defined as the following scaled inter-atomic distance coordinates (denoted as R̃[kl]):
where i labels the coordinates, each of which depends on a pair of atoms. The Euclidean distance between the kth and lth atoms is denoted as $r_{kl}$, and $r^{e}_{kl}$ denotes an approximate equilibrium bond
distance given as the sum of the covalent radii of the two atoms. The parameters α and β are included to allow some tuning of the relative importance of short- and long-range interactions. In this
work, we chose α = 1.7 and β = 0.01, a choice which reflects the fact that short-range repulsion dominates inter-atomic interactions. We find that the quality of the interpolation (see below) is
quite insensitive to small changes in the values of these parameters.
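Since the displayed form of Eq. (7) is not reproduced in the text above, the following sketch uses one plausible Morse-style scaling consistent with the description: an exponential term measured against the covalent-radius sum, plus a weak β-weighted long-range tail. The functional form itself is an assumption for illustration; the values α = 1.7 and β = 0.01 follow the text.

```python
import numpy as np

def scaled_distances(xyz, covalent_radii, alpha=1.7, beta=0.01):
    """Map a Cartesian geometry (N, 3) to scaled inter-atomic distance
    coordinates, one per atom pair.  The Morse-style form below is an
    assumed stand-in for Eq. (7): an exponential term that dominates at
    short range plus a small inverse-distance tail for long range."""
    n = len(xyz)
    q = []
    for k in range(n):
        for l in range(k + 1, n):
            r = np.linalg.norm(xyz[k] - xyz[l])
            # Approximate equilibrium distance: sum of covalent radii
            re = covalent_radii[k] + covalent_radii[l]
            q.append(np.exp(-alpha * (r - re) / re) + beta * re / r)
    return np.array(q)

# Two-atom example: at r = re the exponential term equals 1, so the
# scaled coordinate sits near 1, as described in the text.
xyz = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])
q = scaled_distances(xyz, [0.7, 0.7])
```

With any scaling of this flavor, a coordinate near 1 signals a bonded pair and a coordinate near 0 a well-separated pair, which is what makes the path-length bookkeeping later in the paper roughly count bonds broken and formed.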
The procedure is illustrated for a two-dimensional model problem in Fig. 1. Here, the potential is computed from a model system with three Morse potential terms from the three distances labeled in
the middle panel of Fig. 1. Given three reference points x[1]=(2, −1), x[2]=(−2, −1), and x[3]=(0, 0), the model potential is defined as
$E(\mathbf{x}) = 1.5\,e^{-1.5\left(\lVert\mathbf{x}-\mathbf{x}_1\rVert-0.2\right)^2} + 1.5\,e^{-1.5\left(\lVert\mathbf{x}-\mathbf{x}_2\rVert-0.2\right)^2} + 1.8\,e^{-1.5\left(\lVert\mathbf{x}-\mathbf{x}_3\rVert-0.3\right)^2}.$
Three distance coordinates are chosen as the redundant coordinate system. These coordinates are defined according to Eq. (7) using distances to the three reference points. Note that we choose
different exponential factors in the potential and the coordinate definition to ensure generality of the example. Using a set of internal coordinates whose Euclidean metric loosely approximates
Fukui’s metric, the MEP becomes nearly straight, as shown in the right panel of Fig. 1. This is the case despite the fact that a general exponential factor is used instead of the exact one in the
Morse potentials and that the repulsive higher order term in the Morse potential was ignored in the coordinate scaling. Changing from the Cartesian Euclidean metric to the induced Euclidean metric of
internal coordinates causes the region where energy changes rapidly to be stretched while regions where energies are nearly constant are compressed. As shown in the right panel of Fig. 1, the
asymptotic region, which is generally flat, is compressed near a point while the regions on the PES where energies are extremely high are stretched into long legs on the embedded reduced
configuration space hypersurface.
It is of course possible to incorporate more elaborate choices for the coordinates, for example, by taking into account atom types and their typical radii. Using Eq. (7), coordinates may also be
constructed from force fields or ab initio data. Such scaling methods will likely further improve the quality of the interpolation. However, we leave this to future work as our main aim here is to
show how to effectively find the interpolation path given some arbitrary coordinate system. Of course, there will be diminishing returns as the coordinates become more complex and more molecule
specific since the coordinates will be both harder to determine and less transferable.
B. Interpolation with geodesics
Let us denote the reduced configuration space of the molecule as M. Recall that $M=\mathbb{R}^{3N}/SE(3)$ when chirality needs to be resolved and $M=\mathbb{R}^{3N}/E(3)$ otherwise. We start with a set of K redundant internal coordinate functions $q: M \to \mathbb{R}^{K}$
From this set of functions, a Riemannian manifold (M, g) can be defined, where the metric g is defined at each p ∈ M by
Here, we have used the Cartesian basis for ease of exposition. An approximation to the MEP from point A to B is obtained by constructing the geodesic curve that connects them in the manifold (M, g).
Detailed analysis is given in Appendix A, and we just summarize here.
Let us denote a curve in Cartesian coordinates between A and B as γ(t) = x^i(t)x[i], where {x[i]} is the Cartesian basis. The length of the path from A to B on M is given as [this equation is derived
in Appendix A, where it can be found as Eq. (A10)]
Inserting the matrix expression of the metric [Eq. (10)] into Eq. (11) yields
Here and in the following, we use single overdots to denote first derivatives with respect to the parameter of the curve, and double overdots to denote second derivatives
Henceforth, we omit the explicit dependence of each of the quantities on γ(t) for simplicity.
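As a concrete illustration of the quantities above, the induced metric can be assembled as $g = B^{T}B$ from the Jacobian (Wilson B matrix) of the coordinate functions, consistent with the $B^{T}B$ form that reappears in Eq. (20). The sketch below uses finite-difference derivatives; function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def wilson_b(q_func, x, h=1e-6):
    """Finite-difference Jacobian B_ij = dq_i/dx_j of the coordinate map."""
    q0 = q_func(x)
    B = np.empty((len(q0), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += h
        B[:, j] = (q_func(xp) - q0) / h
    return B

def metric(q_func, x):
    """Induced metric tensor g = B^T B in the Cartesian basis."""
    B = wilson_b(q_func, x)
    return B.T @ B

# Toy coordinate map: q = (x0 + x1, x0 - x1) scales all lengths by
# sqrt(2), so the induced metric is 2 times the identity.
g = metric(lambda x: np.array([x[0] + x[1], x[0] - x[1]]), np.zeros(2))
```

The eigenstructure of such a matrix is exactly what is visualized for the imidazole example in Fig. 4: directions with large eigenvalues are stretched by the coordinate map, directions with small eigenvalues are compressed.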
C. Numerical method to find approximate geodesics between a pair of points
Efficient methods are available for finding geodesic paths on manifolds using the fast marching method.^46,47 Widely used in computer vision^48 and robotics,^49 these methods accurately find the
shortest geodesic between two points. However, they require propagation of the distance function on a grid that spans the entire manifold. This is unrealistic for molecular systems because of the
high dimensionality of the configuration space.
One method related to the current work has seen much success in the field of computer aided design and modeling of 3D objects.^50 Similar to the treatment here, the shapes of real world objects are
constructed as a Riemannian manifold, and methods are developed to perform geodesic interpolations between shapes to estimate how objects deform from one shape to another. Shapes of objects were
represented with a triangular mesh of their surfaces, and the mesh is required to be the same for the entire path. This is usually not appropriate for describing the shape of molecules. Perhaps more
importantly, the method is designed for deformations that do not change the topology of the object and experiences problems of collision when large amplitude motions are involved. In chemistry,
however, the change in shape is often drastic, so a more general approach is necessary.
To find the geodesic curve between points A and B, we first need to find an efficient and accurate approach to numerically evaluate the integral of Eq. (12). We begin by representing the path by a
discrete series of images x^(n) = γ(t^(n)), where x^(1) = A and x^(N) = B, so that Eq. (12) can be written as the sum of the length of each segment between images
We require that the segment between each pair of neighboring images is a geodesic curve. Since any segment of a geodesic curve must also be a geodesic curve, the desired path minimizes the length
defined in Eq. (14). Although there is still no formula to calculate the exact length of these shorter geodesic segments, upper and lower bounds can be derived by slicing the path γ into a
sufficiently large number of pieces
$l_n \le \sum_{k=1}^{m} \left\lVert q\!\left(\frac{(m-k)\,x^{(n)} + k\,x^{(n+1)}}{m}\right) - q\!\left(\frac{(m-k+1)\,x^{(n)} + (k-1)\,x^{(n+1)}}{m}\right) \right\rVert.$
The meaning of these bounds can be easily understood when the reduced configuration space M is viewed as an immersion in the redundant internal coordinate space. As shown in Fig. 2, the geodesic is
the shortest feasible path between the two end points, feasible meaning that it lies entirely inside the reduced configuration space. The lower bound corresponds to a straight line in the redundant
internal space, which is in general infeasible. Being a straight line in the internal space, it is no longer than any path linking the two end points including the geodesic. The fact that it is
shorter than the geodesic attests to its infeasibility. The upper bound, on the other hand, is the reduced configuration space path that is straight in the Cartesian space with the Euclidean metric.
This path is curved with the interpolation metric, and being a feasible path, it is no shorter than the geodesic.
An approximate formula that strictly lies between the upper and lower bounds is also available
$l_n \approx \left\lVert q\!\left(x^{(n)}\right) - q\!\left(\frac{x^{(n)}+x^{(n+1)}}{2}\right) \right\rVert + \left\lVert q\!\left(\frac{x^{(n)}+x^{(n+1)}}{2}\right) - q\!\left(x^{(n+1)}\right) \right\rVert.$
The derivations of these expressions are found in Appendix B. The numerical procedure to compute path length and its derivatives with respect to individual images is shown in Algorithm 1.
Algorithm 1.
function Length(x^(I), f^i, ∂[j]f^i)
  input:
    x^(I)  initial geometries
    f^i  coordinate functions used to generate the metric
    ∂[j]f^i  derivatives of the coordinate functions
  output:
    L  piecewise geodesic length
    ∇L ≡ {∂[j]^(I)L}  derivatives of the path length with respect to each image
  L := 0  ▹ approximate geodesic length
  for all image indices I do
    q^i(I) := f^i(x^(I))  ▹ compute internal coordinate values
    B^i_j(I) := ∂[j]f^i(x^(I))  ▹ compute Wilson's B matrices
    ∂[j]^(I)L := 0  ▹ initialize derivatives of the length with respect to each image
  end for
  for I := 1 to N^image − 1 do
    x̃^(I) := [x^(I) + x^(I+1)]/2  ▹ Cartesian geometry of the midpoint of images I and I+1
    q̃^i(I) := f^i(x̃^(I))  ▹ internal coordinates of the midpoint
    B̃^i_j(I,I) := ∂q̃^i(I)/∂x^j(I) = (1/2) ∂[j]f^i(x̃^(I))  ▹ B matrix at the midpoint, with respect to each side
  end for
end function
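The midpoint length formula of Eq. (17) and the internal-coordinate straight-line lower bound can be sketched in a few lines. Here q_func is any coordinate map, and the helper names are illustrative rather than the paper's implementation.

```python
import numpy as np

def segment_length(q_func, xa, xb):
    """Midpoint approximation to the geodesic length of one segment
    (the Eq. (17)-style formula): route the chord through the image of
    the Cartesian midpoint, so the estimate stays on feasible geometries."""
    qm = q_func(0.5 * (xa + xb))
    return np.linalg.norm(q_func(xa) - qm) + np.linalg.norm(qm - q_func(xb))

def path_length(q_func, images):
    """Total piecewise length, summed over neighboring images (Eq. (14))."""
    return sum(segment_length(q_func, images[n], images[n + 1])
               for n in range(len(images) - 1))

def lower_bound(q_func, xa, xb):
    """Straight line in internal-coordinate space: generally infeasible,
    hence never longer than the geodesic -- a lower bound on its length."""
    return np.linalg.norm(q_func(xa) - q_func(xb))

# For a nonlinear map the midpoint estimate exceeds the lower bound.
q = lambda x: np.array([x[0], x[0] ** 2])
lo = lower_bound(q, np.array([0.0]), np.array([1.0]))
approx = segment_length(q, np.array([0.0]), np.array([1.0]))
```

The gap between the two numbers is exactly the convergence check used in Algorithm 2: when it is large, the segment is too coarse and a midpoint image should be inserted.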
As shown in Appendix B, the error of the lower bound formula [Eq. (15)] compared to the true length scales as O(Δl^3), while the error for both the upper bound [Eq. (16)] and the approximation of
Eq. (17) are O(Δl^2). The fact that Eq. (17) tends to overestimate the true length when images are far apart significantly improves the stability of the numerical process: the positive error, scaling quadratically with the individual segment lengths, acts as an implicit regularization term. Path length minimization using Eqs. (14) and (17) therefore automatically suppresses large distances between images, driving them toward an even distribution along the path. We can thus start from an unevenly distributed set of images without any additional redistribution step.
Using Eqs. (14) and (17), the final optimization problem for the path takes the form
$\min_{\{x^{(n)}\}} \sum_{n=1}^{N-1} \left\lVert q\!\left(x^{(n)}\right) - q\!\left(\frac{x^{(n)}+x^{(n+1)}}{2}\right) \right\rVert + \left\lVert q\!\left(\frac{x^{(n)}+x^{(n+1)}}{2}\right) - q\!\left(x^{(n+1)}\right) \right\rVert.$
We find that the most robust strategy to minimize the path length is to adjust one image at a time. The index of the image being adjusted sweeps back and forth along the path until all images stop
moving. To ensure that the number of images is sufficiently large to describe the geodesic curve, the upper bound of Eq. (16) and the lower bound of Eq. (15) are computed after determining the
minimum path length with Eq. (18). These upper and lower bounds are compared with the approximate length of the final path, as given in Eq. (17). Additional images are added if the difference between
the two bounds is large, and Eq. (18) is used again to determine the optimal location for the images. This procedure is described in Algorithm 2.
Algorithm 2.
procedure Geodesic(x^(I), f^i, ∂[j]f^i, ϵ)
  repeat
    Δx := 0
    for I := 2 to N^image − 1 do
      Minimize Length(x^(I), f^i, ∂[j]f^i) by adjusting image I only
      Let the minimum length be L and the resulting coordinates for image I be y^(I)
      x^(I) := y^(I)
    end for
    Repeat the previous loop with the image index I sweeping in reverse order
  until Δx < ϵ
  Calculate the lower bound L^lower using Eq. (15)
  Calculate the upper bound L^upper using Eq. (16) (m = 10 in this work)
  if L^lower < 0.95L or L^upper > 1.1L then
    Calculate the bounds for each segment between neighboring images
    Add midpoints for all pairs where the difference of the bounds is too large
    Restart the procedure with the larger set of images
  else
    The procedure is finished; the current x^(I) gives the approximate geodesic
  end if
end procedure
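The one-image-at-a-time sweep at the heart of Algorithm 2 can be sketched with a general-purpose minimizer standing in for the paper's optimizer. This is an illustrative sketch, not the reference implementation; the convergence and image-addition logic of the full algorithm are omitted.

```python
import numpy as np
from scipy.optimize import minimize

def local_length(q_func, x_prev, x, x_next):
    """Length contribution of the two midpoint-routed segments (Eq. (17)
    style) that touch the image x being adjusted."""
    def seg(a, b):
        qm = q_func(0.5 * (a + b))
        return (np.linalg.norm(q_func(a) - qm)
                + np.linalg.norm(qm - q_func(b)))
    return seg(x_prev, x) + seg(x, x_next)

def sweep(q_func, images, n_sweeps=5):
    """Minimize the total path length by adjusting one interior image at
    a time, with the sweep direction alternating (cf. Algorithm 2)."""
    images = [np.asarray(x, dtype=float) for x in images]
    order = list(range(1, len(images) - 1))
    for s in range(n_sweeps):
        for i in (order if s % 2 == 0 else order[::-1]):
            res = minimize(
                lambda x: local_length(q_func, images[i - 1], x, images[i + 1]),
                images[i])
            images[i] = res.x
    return images

# Toy check: with an identity coordinate map the geodesic is a straight
# line, so a badly placed middle image relaxes onto the endpoint chord.
path = sweep(lambda x: x,
             [np.zeros(2), np.array([3.0, 4.0]), np.array([1.0, 0.0])])
```

Adjusting a single image per step keeps each subproblem low-dimensional, which is part of why the sweep is robust compared to moving all images simultaneously.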
This procedure assumes that a series of images forming a continuous path between the end points is already known and can be used as a starting guess. In LST and IDPP, such an initial path is
generated through Cartesian linear interpolation. This, however, often causes problems because Cartesian interpolations can introduce collisions between atoms and they are generally qualitatively
different from the MEP. Further complicating the situation, in most reactions, many different local MEPs may exist between the same end points. These paths often have drastically different barrier
heights, and a poor starting guess can lead to interpolated paths with very high barriers. We approach this by sampling many initial paths and choose the one that has the shortest estimated geodesic
length. More specifically, we first find the Cartesian geometry that is closest to the average of the internal coordinates of the end points. This involves minimizing the differences between the
internal coordinates of this new geometry and the average of the internal coordinates of the end points, which is equivalent to LST with only 1 image. Both end points are used as starting points of
this minimization process for a number of iterations, and a moderate size random noise is added to the starting guess, which allows the process to converge to a number of different local minima. This
creates a set of three point paths. The path with the shortest length, calculated using Algorithm 1, is chosen and used to generate an initial path for Algorithm 2. This procedure is detailed in
Algorithm 3.
Algorithm 3.
procedure Interpolate(A, B, f^i, ∂[j]f^i, N^image, ϵ)
  q^(mid) := [q(x^(A)) + q(x^(B))]/2
  L^min := ∞
  for i := 1 to 10 do
    if i is odd then
      x^0 := A + random noise × 0.1
    else
      x^0 := B + random noise × 0.1
    end if
    Find x which minimizes $\lVert q(x) - q^{(\mathrm{mid})}\rVert^{2}$ with x^0 as the starting point
    L := Length([A, x, B], f^i, ∂[j]f^i, ϵ)
    if L < L^min then
      L^min := L
      x^(mid) := x
    end if
  end for
  Use the Geodesic procedure to adjust the position x^(mid), optimizing the path [A, x^(mid), B]
  Create a path of N^image images with
  $x^{(I)} = \begin{cases} A, & 1 \le I \le \tfrac{1}{3}N^{\mathrm{image}} \\ x^{(\mathrm{mid})}, & \tfrac{1}{3}N^{\mathrm{image}} < I \le \tfrac{2}{3}N^{\mathrm{image}} \\ B, & \tfrac{2}{3}N^{\mathrm{image}} < I \le N^{\mathrm{image}} \end{cases}$
  Use the Geodesic procedure to adjust the path [x^(I)]
end procedure
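The initial-midpoint search of Algorithm 3 reduces to a least-squares problem in internal coordinates, seeded from randomly perturbed endpoints. A hedged sketch with illustrative names (again standing in for the paper's code):

```python
import numpy as np
from scipy.optimize import minimize

def midpoint_guess(q_func, xa, xb, trials=10, noise=0.1, seed=0):
    """Find a Cartesian geometry whose internal coordinates are closest
    to the average of the endpoints' internal coordinates (cf. Algorithm
    3).  Alternating, randomly perturbed endpoints seed the local
    minimizations so that several local minima can be reached."""
    rng = np.random.default_rng(seed)
    q_mid = 0.5 * (q_func(xa) + q_func(xb))
    best, best_val = None, np.inf
    for i in range(trials):
        x0 = (xa if i % 2 == 0 else xb) + noise * rng.standard_normal(xa.shape)
        res = minimize(lambda x: np.sum((q_func(x) - q_mid) ** 2), x0)
        if res.fun < best_val:
            best, best_val = res.x, res.fun
    return best

# Toy check with an identity map: the closest geometry is the plain
# Cartesian average of the two endpoints.
x_mid = midpoint_guess(lambda x: x, np.zeros(2), np.array([2.0, 0.0]))
```

For a genuinely nonlinear coordinate map the different seeds can land in distinct local minima, and keeping the one with the shortest three-point path length is what steers the interpolation away from high-barrier MEPs.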
D. Geodesic interpolation and variational reaction coordinate (VRC) method
The geodesic interpolation process is closely related to the Variational Reaction Coordinate (VRC) method for the optimization of reaction paths recently developed by Birkholz and Schlegel.^36,37 In
VRC, the steepest descent reaction path is obtained by minimizing the Variational Reaction Energy (VRE),^33–35 defined as
where U is the potential energy and x(t) is the reaction path. In the VRC method, reaction paths are expanded as linear combinations of a set of quartic B-spline functions. In fact, the authors
recognized that without the gradient term, the VRE becomes the path length in internal coordinates. In fact, in their later work,^37 the following arc length in redundant internal coordinates is
minimized as the initial step of the VRC process
$S(t_1, t_2) = \int_{t_1}^{t_2} \sqrt{\tau^{T} B^{T} B\, \tau}\; dt, \qquad \text{where } B_{ki} \equiv \frac{\partial q_k}{\partial x_i}, \quad \tau_i \equiv \dot{x}_i,$
which is strongly reminiscent of Eq. (12). However, as the authors have discussed, the VRC method as currently formulated has some problems and does not take full advantage of the redundant internal
coordinates. Similar to the work presented in this paper, the VRC paths are always constructed in Cartesian coordinates to avoid infeasibility problems. However, the paths are expanded with spline
functions in Cartesian coordinates. The resulting equations are not invariant to overall translations and rotations. Because the metric tensor is degenerate in translational and rotational degrees of
freedoms, the VRC equations are singular and need explicit projection of rotational and translational degrees of freedom to remain stable.
A projection scheme is used in VRC to address the effect of coordinate redundancy. As shown in Appendix D, this projection scheme is non-holonomic. Because the derivatives of the projection matrix
are omitted, integration in their projected basis breaks coordinate invariance. In order to obtain an exact coordinate invariant formulation of integrals using redundant coordinates, contributions
from the commutators between basis components must be explicitly included, as shown in Appendix A.
Perhaps more importantly, in VRC, no scaling is performed for internal coordinates. The arc length optimization step effectively drops the first factor in Eq. (19). As we show in this paper, a much
better approximation to Eq. (19) can be achieved by scaling internal coordinates to make the first factor nearly constant. Note that Eq. (19) simplifies directly into Eq. (20) when the coordinates
satisfy ∂U/∂x^i ≈ 1.
On the other hand, the method derived in this work is invariant with respect to overall translations and rotations. More precisely, path lengths are properly defined on the reduced configuration
space. Singularities due to translations and rotations are therefore avoided by construction.
In this section, a number of numerical examples of our geodesic interpolation are provided, along with comparisons to other interpolation methods for the same systems.
A. Finding geodesics from a series of images: Imidazole hydrogen migration
We first compare the performance of different interpolation methods when starting from a given initial guess path. To compare the performance and stability of the geodesic approach and other
interpolation schemes, the reaction path for intramolecular hydrogen migration in imidazole is generated using our new geodesic (Algorithm 2), LST, IDPP, and Nebterpolator interpolations. The LST
interpolation is performed in internal coordinates. To make it easier to visualize the interpolated paths, the molecule is constrained at the geometry of the deprotonated structure, and only the
migrating hydrogen is allowed to move. This allows us to plot the path of the hydrogen in a two-dimensional plane. The metric can also be visualized with this simplification. Since only one atom is
moving and the movement is constrained to the plane, each point on the plane corresponds to a unique molecular geometry.
We first generated a linear interpolation in Cartesian coordinates and shifted it by 2 Å so that no atomic collisions occur (gray dotted line labeled “Initial Path” in Fig. 3). This new set of
initial images is then used for the initial guess geometries for each of the interpolation methods. The initial path and the interpolations generated from each method are shown in Fig. 3. The true
MEP calculated with NEB at the UB3LYP/6-31g level of theory is depicted as a solid black line for comparison.
As shown in Fig. 3, all interpolation methods except for LST generated continuous paths. In LST, the separate solution branches of its target function are disjoint. As a result, it can experience
discontinuities even when seeded with a closely spaced, “continuous,” set of images. Nebterpolation produces a smooth path, but this path is relatively far away from the MEP. This is because no
scaling is done for internal coordinates; hence, its metric is a poor approximation to the Fukui metric. IDPP, on the other hand, generates a continuous path that reproduces the MEP reasonably well.
It does, however, contain a kink. Surprisingly, this kink in the curve does not go away even when the convergence threshold of IDPP is significantly tightened. By shifting the initial guesses by a
different amount, it is discovered that at some shift values, this kink does not appear, while with some other shift values, the IDPP process diverges and hydrogen atoms are ejected from the vicinity
of the molecule. This shows that, as discussed above, IDPP convergence behavior is not always robust. Geodesic interpolation produces a continuous curve close to the MEP, and the path is stable as
long as the initial guess does not pass through the C-H bond. The eigenvectors of the metric tensor are shown in the left panel of Fig. 4 to demonstrate the curved nature of the manifold. The
eigenvector with larger eigenvalue is plotted with black grid lines, while the eigenvector that corresponds to the smaller eigenvalue is plotted with blue grid lines. At most geometries, the two
eigenvalues of the metric tensor, expanded in the Cartesian basis, are well-separated, often by many orders of magnitude. In the right panel of Fig. 4, we compare this to Fukui’s IRC metric given by
Eq. (5), where the leading eigenvector corresponds to the energy gradient. The second eigenvector always has a zero eigenvalue and points along the energy isosurface. Although not identical, the
resemblance between the IRC metric and the metric generated by the simple forms of Eq. (7) is quite evident. This results in a geodesic interpolation path that is quite close to the MEP. Here, we
once again observe the crucial importance of the choice of metric. While the geodesic interpolation process serves to guarantee smoothness, the ability of geodesic interpolations to produce
chemically sensible paths depends on the metric. In fact, we can conjecture that the relatively good performance of IDPP (disregarding problematic convergence for many initial guess paths) is due to
its scaling method.
B. Geodesic interpolation from end points: Ring formation in dehydro Diels-Alder (DDA) reaction
Here, we take a closer look at our new geodesic interpolation (Algorithms 2 and 3) using a challenging numerical example: ring formation in the Dehydro Diels-Alder (DDA) reaction.^51 Discovered in
1895,^52 the high yield dimerization of phenylpropiolic acid 1 into 1-phenylnaphthalene-2,3-dicarboxylic acid anhydride 5 (see Scheme 1) is a well studied example of an intramolecular Diels-Alder
reaction. It is known that the reaction mechanism involves formation of anhydride (1 → 2), followed by a Diels-Alder step where the ring is formed (2 → 3 → 4), and finally proton exchange with the
solvent to yield the stable aromatic product (4 → 5).^53 Here, the geodesic interpolation procedure described in Sec. III is applied to construct the reaction path for the intramolecular Diels-Alder
cycloaddition process 2 → 3 → 4. This reaction is chosen because of its large size and concerted nature that make it quite challenging. The reaction path involves simultaneous large amplitude motion
of many atoms, including ring closure and large torsional displacements.
The initial and final geometries for interpolation, shown in Fig. 5, are optimized with UB3LYP/6-31g. Starting from the same endpoints, IDPP interpolation diverges and we were unable to correct this.
Nebterpolation requires a reasonable initial guess path (usually the reaction trajectory obtained from ab initio nanoreactor^6 dynamics) that was not available in this case. Finally, LST was not able
to generate a continuous path. We are therefore not able to provide comparisons with existing methods for this system because none can generate a path for comparison. The geodesic interpolation path
is constructed from these two end points using Algorithm 3.
Twenty images are used to describe the path, including the end points. The converged geodesic path has an arc length of 2.206 by Eq. (17). The lower bound for the length of the final path is 2.184,
while the upper bound is 2.213, which is deemed sufficiently tight. The bounds are calculated with Eqs. (15) and (16), respectively. The distances here are unitless due to the Morse-like scaling used
for the interatomic distance coordinates, and the length values correspond roughly to the total number of bonds formed and/or broken along the path. The scaled distance coordinates take values of ∼1
near the sum of covalent radii and values of ∼0 at large distances. Here, one bond breaking causes a scaled coordinate to change from near 1 to near 0, whereas one bond forming results in another
scaled coordinate to change from near 0 to near 1, while all other interatomic distances are kept roughly constant along the whole path, resulting in a length slightly larger than 2.
To examine the quality of the interpolation, NEB optimization was performed using the interpolated path as starting guess. The energy profile of the geodesic and the optimized MEP are shown in Fig. 6
. Although the barrier estimated through the geodesic is much higher than the transition state on the true MEP, the interpolation can still be considered quite successful given the complexity of the
reaction and the fact that the interpolation process uses solely geometric information. When used as an initial guess, the interpolation significantly accelerates the subsequent NEB processes. Note
that the energy profile from the geodesic path also qualitatively reproduces the shape of the true MEP, which indicates that Fukui’s metric is qualitatively well approximated.
The transition state on the optimized MEP and the corresponding point on the geodesic curve are compared in Fig. 7. Here, because the integrated lengths in Cartesian are almost the same for the MEP
and the geodesic interpolation path, we chose the point on the geodesic curve that is at the same integrated path length as the transition state on MEP. This point is also near the top of the
barrier. The similarity between the two geometries is evident.
C. Using geodesics as initial path for NEB calculations
One important application of reaction path smoothing and interpolation methods is to create a reasonable starting guess for MEP optimization algorithms such as NEB.^13 Here, we compare the
performance of a number of path smoothing methods when used as the initial guess for an NEB calculation. Reaction paths extracted from ab initio MD simulations were used, all of which are generated
by nanoreactor-driven reaction discovery^6 for the Urey-Miller experiment.^54 To ensure that the sample contains a large variety of reactions, each path corresponds to a distinct reaction, and a
total of 403 paths are used. The test set includes reactions with and without barriers, as discussed in more detail below. The NEB calculations are performed with DL-FIND^55 using UB3LYP/6-31G^*. All
electronic structure calculations are performed using the TeraChem package.^56,57 Solvation effects are modeled with C-PCM (a conductor-like polarizable continuum model), using a dielectric constant
appropriate for water (ϵ = 80).^58 The initial path interpolation methods tested here include LST and IDPP starting from a Cartesian linear interpolation path, the extended MD trajectory that
contains the reaction generated with the same process as the Nebterpolator method,^31 as well as geodesic, Nebterpolator, and IDPP paths starting from the MD paths for the respective optimization
processes. IDPP interpolations are performed with the ASE package.^59 The results are shown in Table I. For each method of generating initial reaction paths, we report the number of failed NEB runs,
the average number of single point energy and gradient evaluations needed to converge the NEB calculation, the average error by which each method overestimates the barrier, and the percentage of
cases for each method where the converged NEB path has the lowest barrier.
TABLE I. Performance of NEB calculations started from initial paths generated by each method.

Initial path    Failed runs   Average number of evaluations   RMS overestimation of barrier (kcal/mol)   Lowest barrier success rate (%)
MD path         4             121.9                           >200                                       34
Geodesic        0             101.2                           47.7                                       65
Nebterpolator   11            248.4                           >200                                       51
LST             9             303.4                           77.2                                       18
IDPP            66            165.2                           166.4                                      12
IDPP(MD)        24            168.5                           147.8                                      8
Some clarification of the numbers in Table I is in order. The number of failed paths includes cases where the NEB procedure failed to converge after 1000 iterations or where the paths are so distorted that the self-consistent field (SCF) step in the electronic structure calculation fails to converge. These paths also exhibit images with relative energies larger than 1000 kcal/mol that are clearly far
from the true MEP. The NEB-like process in IDPP is iterative and requires a starting guess path, where either a linear interpolation in Cartesian coordinates or a trajectory obtained from dynamics
can be supplied. For IDPP starting with Cartesian interpolations, this also includes 48 cases where the NEB-like process of IDPP itself did not converge. IDPP starting with MD trajectories did not
encounter the same failure. All failed runs were excluded when calculating the average number of iterations needed for NEB convergence. For barrier height comparisons, we start from initial paths
generated by each method and then compare the final converged path after NEB optimization to identify which method leads to MEPs with lower barriers. Because of the intrinsic accuracy of the method
we used, any methods with barriers less than 20 kJ/mol (4.8 kcal/mol) above the lowest barrier height are also considered to have generated a lowest barrier MEP.
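The "lowest barrier" classification described above can be sketched as a small helper; the function name and the dictionary layout are illustrative assumptions, not from the paper:

```python
def lowest_barrier_methods(barriers, tol_kcal=4.8):
    """Given converged NEB barrier heights (kcal/mol) keyed by method name
    for one reaction, return the set of methods counted as having found a
    lowest-barrier MEP: within 20 kJ/mol (~4.8 kcal/mol) of the minimum."""
    best = min(barriers.values())
    return {m for m, b in barriers.items() if b - best < tol_kcal}
```

Applied per reaction over the 403-path test set, the fraction of reactions in which a method appears in this set gives its success-rate column in Table I.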
We were surprised to find that most path smoothing methods did not perform as well as simply taking MD trajectories as a starting path for MEP search. IDPP produced disappointing results. While on
average, IDPP paths have lower barriers than the MD trajectories, and they are continuous unlike LST, the complex nonlinear optimization process often introduces highly strained paths that do not
correctly reflect the MEP and complicate the NEB calculation that follows. As mentioned before, in 48 cases, IDPP failed to even generate a path when starting from Cartesian interpolations because
the Cartesian interpolations often go through highly energetically disfavored regions causing the IDPP optimization process to diverge. Other methods did not encounter difficulty creating
interpolation paths, and there were no problems creating initial paths when using MD trajectories as starting guesses. The NEB optimization to obtain MEPs also tends to fail when starting from IDPP
generated paths because fragmented intermediate structures with multiple breaking bonds are sometimes generated by the IDPP process. The total number of failed paths is 66 when started from Cartesian
paths, significantly more than any other method. Using MD trajectories as starting guesses in the IDPP process reduces this number to 24, which is still quite high. This evidences the instability of IDPP, which results from the complex and highly nonlinear NEB-like optimization process involved. We believe that this is a disadvantage of the method, especially for treating reactions with large-amplitude
motions. It also exhibited a strong tendency to lead to MEPs with higher reaction barriers after the NEB optimization for both types of starting guesses. Using IDPP paths as the initial guess for
NEB-based MEP optimization also results in more ab initio evaluations compared to MD trajectories.
Surprisingly, LST paths using the same set of internal coordinates as IDPP are more stable and outperform IDPP on all measures even though most of the interpolated paths are discontinuous. The
average barrier is significantly lower than IDPP, which reflects the compromises that had to be made in IDPP to ensure continuity. Despite the ubiquitous occurrences of discontinuities, the NEB
optimization itself is sometimes capable of fixing discontinuities when they are not too severe. Note that while superior to IDPP paths, LST paths require many more ab initio evaluations in NEB
calculations than unsmoothed MD paths.
Nebterpolation yields significantly lower failure rates than IDPP on this test set. Compared to LST and IDPP, it also yields MEPs with lower energy barriers. However, it still fails more often than
just using the raw MD path without Nebterpolation, and on average, it takes more ab initio evaluations to converge because the discontinuity removal mechanism deteriorates the path. More
specifically, the Cartesian displacement term which draws both ends of the discontinuity together causes that segment of the path to resemble Cartesian interpolation, which is well-known to create
highly problematic paths. As a result, for a small number of paths, the Nebterpolator yields paths with barriers higher than 1 Hartree. This resulted in the significantly higher barrier number in
Table I and is responsible for the large number of evaluations needed and failed NEB runs. It performs very well when such problems do not occur and, on average, is likely to produce a low-barrier MEP.
In general, for all three of the preexisting path-finding methods surveyed here (LST, IDPP, and Nebterpolator), it is crucial that the resulting smoothed paths be inspected to detect pathological
behavior. If these problematic paths are not manually removed, all three methods to smooth MD trajectories result in worse convergence behavior when the paths are used as starting guesses for MEP
optimization. This is a difficulty for automated reaction discovery and refinement procedures,^6–11 where it is critical that the MEP optimization can proceed unattended as much as possible.
Geodesic interpolation clearly outperforms all other considered interpolation methods on all measures for this test set. The formulation itself prevents discontinuities from occurring, obviating the
need for ad hoc procedures to remove discontinuities. This preserves the proper structure of the internal coordinates and, as long as a sensible set of coordinates is used, a good approximation to
the MEP can be obtained. It takes fewer evaluations to converge NEB calculations starting from the geodesic-generated path, and these are most likely to produce a low barrier MEP. Thus, the new
geodesic method presented here can serve as a good method for generating initial MEPs that will be further optimized by NEB.
As mentioned above, our test set includes reactions with and without barriers. The test set contains 124 barrierless reactions (out of 403), defined as having a reaction barrier lower than 0.063 kcal/mol. For these reactions, geodesic interpolation produces 39 barrierless paths before optimization, allowing one to skip subsequent NEB calculations. Other methods are less successful at identifying barrierless reactions before optimization. For comparison, LST produces 35 barrierless paths, Nebterpolator produces 24, direct IDPP produces 11, and IDPP from MD trajectories produces 5.
Interpolations performed in redundant internal coordinates are known to perform significantly better than those in Cartesian coordinates. Previous methods for creating interpolations in redundant internal coordinates were investigated and found to experience difficulties that can be traced back to their treatment of redundant coordinates. Careful analysis shows that implicit metric changes are responsible for the performance improvement. By choosing the proper metric, interpolation paths similar to MEPs can be located directly in Cartesian coordinates. This
allows interpolations to be reformulated as an approximation to Fukui’s Intrinsic Reaction Coordinate (IRC) through an approximation to the geodesic formulation. The resulting method is equivalent to
finding the geodesic curve between the two end points, when the Euclidean metric of the internal coordinates roughly approximates Fukui’s IRC metric given in Eq. (4). The condition for the metric to
generate well-behaved approximations to an IRC is derived, and a simple coordinate choice that provides satisfactory performance was presented. We show that geodesic interpolations can be
conveniently computed in Cartesian coordinates by minimizing the path length through a numerical procedure that also provides bounds for the lengths of exact geodesics. Using a number of examples,
the geodesic approach is shown to be robust and stable in situations where previous methods fail. Geodesic interpolation also generates superior starting guesses for NEB calculations compared to
existing methods.
See supplementary material for geometries of molecules along reaction paths described in the main text and a Python reference code for performing the geodesic interpolations according to the
described algorithms.
This work was supported by the U.S. Office of Naval Research, Multidisciplinary University Research Initiatives (MURI) Program, Contract No. N00014-16-1-2557, the U.S. Office of Naval Research
Contract No. N00014-17-1-2875, a generous gift from Procter and Gamble Company, and by Toyota Motor Engineering and Manufacturing North America, Inc.
We briefly review basic concepts of Riemannian geometry, using Einstein summation notation throughout. A manifold is a topological space which locally resembles Euclidean space but whose global
structure can be more complicated. Consider the surface of the earth: locally it is isomorphic to $\mathbb{R}^2$ and we experience the world around us as flat. However, the shortest airline route between two
cities on different continents is a great circle which can be very different from the straight line on a two-dimensional map. The crucial innovation of Riemannian geometry is to separate the geometry
from the coordinate representation. We can then discuss the great circle route without reference to the conventions of latitude and longitude.
More formally, M is a manifold if at every point p ∈ M, there exists a neighborhood U ⊂ M and a mapping $\phi_U: U \to \mathbb{R}^n$, which is bijective, continuous, and with a continuous inverse. Each such mapping $\phi_U$ is called a coordinate map or chart, and the collection of charts $\{\phi_U\}$ that covers M is termed an atlas. In general, unless explicitly needed, the subscript U will be dropped.
Using local charts, one can construct tangent vectors and tangent planes. These are the natural generalizations of the tangent to a curved surface in $\mathbb{R}^3$. A tangent vector is defined as an equivalence class of curves through point p which share the same derivatives when mapped into $\mathbb{R}^n$ by $\phi$. The tangent plane of a manifold M at point p, denoted as $T_pM$, is the vector space of all tangent vectors at point p. The tangent bundle, TM, is the set of all tangent planes of M and plays a significant role in the geometry of classical mechanics.
Given a basis $\{x_i\}$ of $\phi(U) \subset \mathbb{R}^n$, the image of a 1-dimensional curve $\gamma: \mathbb{R} \to M$ can be expressed as
where the symbol ◦ denotes function composition (f ◦ g)(x) ≡ f(g(x)).
With this, a basis $e_i$ of $T_pM$ is naturally defined, and a tangent vector $X = u^i e_i$ is the class of all curves γ(t) which satisfies γ(0) = p and
The tangent vector is therefore the derivative of the curve on the manifold, defined using the local chart to map the derivative from Euclidean space, and we write $\gamma'(t)$, just like its Euclidean counterpart.
Recalling that coordinate charts are bijective and thus have well-defined inverses, tangent vectors act on functions $f: \mathbb{R}^n \to \mathbb{R}$ via the chain rule
$\gamma'(t)(f) \equiv \frac{d}{dt}\left[f \circ \gamma(t)\right] = \frac{d}{dt}\left[(f \circ \phi^{-1}) \circ (\phi \circ \gamma(t))\right] = \frac{\partial (f \circ \phi^{-1})}{\partial x^i}\,\frac{dx^i}{dt} = u^i\,\frac{\partial (f \circ \phi^{-1})}{\partial x^i}.$
Note that $f \circ \phi^{-1}: \mathbb{R}^n \to \mathbb{R}$. We have “pulled” the derivative back to $\mathbb{R}^n$ where it is well defined. By this construction, tangent vectors correspond to partial derivatives, and it is conventional to write the basis for $T_pM$ as
A vector field X is then a mapping that associates to each point p ∈ M a tangent vector $X_p \in T_pM$. Here, we note the crucial difference between a basis and a set of coordinates for the tangent space. The basis definition in Eq. (A4) relies on a local chart, and every point p ∈ M can potentially have a different chart imbuing each tangent plane with a different basis. A choice of basis is termed coordinate only if it is consistent amongst neighboring points, in the sense that for each p ∈ M, there exists a neighborhood U containing p with functions $x^i: U \to \mathbb{R}$ which satisfy Eq. (A4).
This may at first seem trivial since we are only extending Eq. (A4) from a single point to a small neighborhood. However, even the ability to extend to infinitesimally small neighborhoods poses a
strong constraint that the Lie bracket between basis vectors vanish, i.e.,
Such a basis is called holonomic, and only a holonomic basis affords a set of coordinates. Condition (A5) is easy to understand by recognizing that partial derivatives must not depend on the order
of differentiation. Applying the right-hand side of Eq. (A5) to an arbitrary function yields
Equation (A5) therefore requires that the Hessian of f be symmetric. A non-holonomic basis is inconsistent in the sense that Eq. (A5) cannot hold true for all finite neighborhoods. When one tries to estimate the coordinate values using such a basis, the result is path dependent and basis vectors do not correspond to partial differentiation. Although calculus in such a basis is still possible, it requires explicit computation of the commutator, Eq. (A5), at each point p ∈ M, which is significantly more difficult. In this work, all calculations are performed in Cartesian coordinates, which are holonomic.
The definition of tangent vectors allows one to specify a notion of distance on the manifold. A metric tensor at a point p ∈ M is a functional $g_p(X_p, Y_p)$ which maps two tangent vectors $X_p, Y_p \in T_pM$ to a number. In general, $g_p$ is required to be symmetric, bilinear, and non-degenerate.
Further, a metric is Riemannian if it is smooth on M and positive definite, i.e.,
When a basis $\frac{\partial}{\partial x^i}$ for $T_pM$ is given, the metric is often expressed by its matrix elements in this basis as
A manifold equipped with a Riemannian metric is called a Riemannian manifold. The Riemannian metric generalizes the notion of an inner product from Euclidean spaces and allows one to measure
distances on the manifold with the following length element:
The length of a curve $\gamma(t) = x^i(t)\,x_i$ from A to B on M is
Here, the coordinate dependence on parameter x^i(t) is determined from γ through Eq. (A1).
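In the usual textbook notation, the length element and the length of a curve γ from A to B take the forms

```latex
ds^2 = g_{ij}\,dx^i\,dx^j, \qquad
L(\gamma) = \int_{t_A}^{t_B} \sqrt{g_{ij}\,\frac{dx^i}{dt}\,\frac{dx^j}{dt}}\; dt .
```

These are standard forms reconstructed from the surrounding text, not copied from the paper's displayed equations.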
The definition of path length in turn allows the definition of a geodesic as a curve whose arc length L is extremal, i.e., locally we have
A geodesic is the generalization of a straight line in Euclidean space. When the manifold is flat, its geodesics are straight lines, so this concept is especially helpful for our understanding of
interpolation methods.
In a holonomic basis, the condition for a curve $\gamma(t) = x^i(t)\,x_i$ to be a geodesic is
where $\Gamma^i_{jk}$ are the Christoffel symbols
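In the standard form, the geodesic condition and the Christoffel symbols of the Levi-Civita connection read

```latex
\ddot{x}^i + \Gamma^i_{jk}\,\dot{x}^j \dot{x}^k = 0, \qquad
\Gamma^i_{jk} = \tfrac{1}{2}\, g^{il}\left(\partial_j g_{lk} + \partial_k g_{jl} - \partial_l g_{jk}\right).
```

These are the generic textbook expressions; the paper's own equation numbering may differ.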
An important property of a Riemannian manifold is that it can always be embedded in a Euclidean manifold with sufficiently high dimensionality, a result known as the Nash embedding theorem.^34,35
This means that any Riemannian manifold can be mapped into a curved hypersurface in a flat space, where the length of curves on the surface measured using the simple Euclidean metric of the higher
dimensional space matches the length of the same curve measured in the Riemannian manifold with a possibly non-Euclidean metric. As discussed in the main text, this has significant implications for
reaction paths and interpolations.
We denote q^(n) = q(x^(n)) and Δq^(n) = q^(n+1) − q^(n). Using Cauchy’s inequality, we have
Integrating from x^(n) to x^(n+1) on both sides yields
Since the left-hand side is simply $|\Delta q^{(n)}|^2$, we have
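Assuming $l_n$ denotes the internal-coordinate length of the path segment between $x^{(n)}$ and $x^{(n+1)}$ (an interpretation inferred from the surrounding derivation), the resulting bound has the standard chord form

```latex
l_n = \int_{s_n}^{s_{n+1}} \left|\dot{q}\right| ds \;\ge\; \left|q^{(n+1)} - q^{(n)}\right| = \left|\Delta q^{(n)}\right|.
```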
Equation (B5) provides a lower bound for the true length. In fact, from the second order Taylor expansion of the curve, using arc length parameterization, we have
Since we parameterized by arc length,
Multiplying Eq. (B7) by $(\Delta q_k + \dot q_k \Delta s)$ and then summing over indices gives
$\Delta q_k\,(\Delta q_k + \dot q_k \Delta s) = \dot q_k \Delta s\,(\Delta q_k + \dot q_k \Delta s) + \tfrac{1}{2}\ddot q_k \Delta s^2\,(\Delta q_k + \dot q_k \Delta s) + O(\Delta s^4),$
$|\Delta q|^2 + \Delta q_k \dot q_k \Delta s = \Delta q_k \dot q_k \Delta s + |\dot q|^2 \Delta s^2 + \tfrac{1}{2}\ddot q_k \Delta q_k \Delta s^2 + \tfrac{1}{2}\ddot q_k \dot q_k \Delta s^3 + O(\Delta s^4),$
Inserting Eq. (B7), we get
Therefore, Eq. (B5) also gives an approximation for the path length through second order.
On the other hand, since the geodesic curve minimizes path length, the length of any given path in the manifold serves as an upper bound for the length of the geodesic. Here, we take a straight line in Cartesian coordinates to measure the upper bound. The expression of a straight line between x^(n) and x^(n+1) is
where t ∈ [0,1]. With the internal coordinate metric, the length of such a Cartesian straight line provides the following upper bound
To evaluate the value and derivatives of this upper bound, the line segment is further subdivided into m subsegments. When the subsegments are sufficiently small, the length of each subsegment can be
accurately estimated with Eq. (B5). Ten to twenty subsegments can usually produce an accurate upper bound. In the case of division into m segments, the upper bound is
$l_n \le \sum_{k=1}^{m} \left| q\!\left(\frac{(m-k)\,x^{(n)} + k\,x^{(n+1)}}{m}\right) - q\!\left(\frac{(m-k+1)\,x^{(n)} + (k-1)\,x^{(n+1)}}{m}\right) \right|.$
It is easy to see that when m = 1, Eq. (B18) simplifies to Eq. (B5) and yields the lower bound, whereas when m → ∞, Eq. (B18) yields the upper bound. An efficient and robust approximation can be
obtained through a hybrid of the upper and lower bound approximations by setting m = 2,
$l_n \approx \left| q(x^{(n)}) - q\!\left(\tfrac{x^{(n)}+x^{(n+1)}}{2}\right) \right| + \left| q\!\left(\tfrac{x^{(n)}+x^{(n+1)}}{2}\right) - q(x^{(n+1)}) \right|.$
Equation (B19) can be differentiated analytically, and the total path length can be minimized with any standard numerical minimization method, such as L-BFGS, to yield an approximate geodesic curve.
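The minimization described here can be sketched in a few lines. The choice of internal coordinates (all pairwise interatomic distances), the image count, and all function names below are illustrative assumptions, and SciPy's finite-difference gradient stands in for the analytic derivative of Eq. (B19):

```python
import numpy as np
from scipy.optimize import minimize

def q(x):
    """Internal coordinates of a flat Cartesian geometry x (3N,):
    here simply all pairwise interatomic distances (an assumed choice)."""
    x = x.reshape(-1, 3)
    diff = x[:, None, :] - x[None, :, :]
    iu = np.triu_indices(len(x), k=1)
    return np.sqrt((diff ** 2).sum(-1))[iu]

def path_length(images):
    """Midpoint approximation of the total path length: each segment is
    split at its Cartesian midpoint and the two internal-coordinate
    chord lengths are summed."""
    total = 0.0
    for a, b in zip(images[:-1], images[1:]):
        mid = 0.5 * (a + b)
        total += np.linalg.norm(q(a) - q(mid)) + np.linalg.norm(q(mid) - q(b))
    return total

def geodesic(x_start, x_end, n_images=8):
    """Approximate geodesic: start from a linear Cartesian path, hold the
    endpoints fixed, and relax the interior images with L-BFGS."""
    t = np.linspace(0.0, 1.0, n_images)[:, None]
    path = (1 - t) * x_start.ravel() + t * x_end.ravel()
    inner_shape = path[1:-1].shape

    def objective(flat):
        images = np.vstack([path[:1], flat.reshape(inner_shape), path[-1:]])
        return path_length(images)

    res = minimize(objective, path[1:-1].ravel(), method="L-BFGS-B")
    images = np.vstack([path[:1], res.x.reshape(inner_shape), path[-1:]])
    return images, res.fun
```

Because the optimizer only ever shortens the path, the returned length is bounded above by the length of the linear starting guess, mirroring the upper-bound argument in the text.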
It is straightforward to verify that the length given by Eq. (B19) lies between the upper and lower bounds. The fact that it is larger than the lower bound prescribed by Eq. (B5) follows simply from
the triangle inequality.
On the other hand, if we perform the same analysis on the segment of the straight line between x^(n) and $xn+xn+12$, we have
$\left| q(x^{(n)}) - q\!\left(\tfrac{x^{(n)}+x^{(n+1)}}{2}\right) \right| \le \int_0^{0.5} \sqrt{\sum_{i,j} g_{ij}\,\bigl(x^{i,(n+1)} - x^{i,(n)}\bigr)\bigl(x^{j,(n+1)} - x^{j,(n)}\bigr)}\; dt.$
The same can be done for the segment between $xn+xn+12$ and x^(n+1). Summing these two terms together yields
$\left| q(x^{(n)}) - q\!\left(\tfrac{x^{(n)}+x^{(n+1)}}{2}\right) \right| + \left| q\!\left(\tfrac{x^{(n)}+x^{(n+1)}}{2}\right) - q(x^{(n+1)}) \right| \le \int_0^1 \sqrt{\sum_{i,j} g_{ij}\,\bigl(x^{i,(n+1)} - x^{i,(n)}\bigr)\bigl(x^{j,(n+1)} - x^{j,(n)}\bigr)}\; dt.$
APPENDIX C: PROOF THAT THE MAPPING q IS AN ISOMETRIC IMMERSION OF THE REDUCED CONFIGURATION SPACE (M, g) IN THE REDUNDANT INTERNAL COORDINATE SPACE (N, h)
Conventional geometry optimization or interpolation methods operate in the redundant internal coordinate space. More specifically, previous methods concern a feasible subspace on manifold (N, h),
where $N = \mathbb{R}^K$ is the space of the values of K internal coordinates, with a Euclidean metric
The manifold (N, h) has a simple structure and contains all numeric values for each internal coordinate, including those values of internal coordinates that have no counterpart in a realizable
molecular geometry (i.e., where the values of the internal coordinates are inconsistent). We further use manifold (M, g) to denote the reduced configuration space of the molecule. (M, g) spans only
realizable geometric configurations but has a more complex structure which results from the feasibility constraints. Here, we show that internal coordinate functions q facilitate an isometric
immersion of (M, g) into (N, h) if certain conditions on q are fulfilled. It is readily seen that the K coordinate functions q are a mapping from M to N. For q: M → N to be an immersion, it has to be
differentiable and its differential $D_p q: T_pM \to T_{q(p)}N$ must be injective, i.e., one-to-one. Given a tangent vector $X_p = X^i \frac{\partial}{\partial x^i} \in T_pM$, the differential of the mapping can simply be obtained
through the chain rule
And this linear mapping is injective if and only if
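In generic index notation (a reconstruction; the paper's Eqs. (C2) and (C3) may be typeset differently), the chain-rule pushforward and the injectivity condition are

```latex
D_p q\,(X_p) = X^i\,\frac{\partial q^a}{\partial x^i}\,\frac{\partial}{\partial q^a}, \qquad
\operatorname{rank}\!\left(\frac{\partial q^a}{\partial x^i}\right) = \dim M .
```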
Therefore, the internal coordinate functions q are an immersion of (M, g) into (N, h) if q is everywhere differentiable and nowhere underdetermined in M. In the case where q itself is also injective
(for example, where there are no periodic functions such as angles or torsions), q is an embedding. We note that although problems of periodicity and non-uniqueness arise in the conventional
treatments in N, the reduced configuration space M does not have any such redundancy. Periodicity and global multi-valuedness of the coordinate functions do not affect the analysis in (M, g) either
since the metric only requires local derivative information.
The differential of q given in Eq. (C2) maps a vector in $T_pM$ into a vector in $T_{q(p)}N$, which is termed a pushforward in differential geometry. Correspondingly, its dual mapping provides a pullback from the cotangent space $T^*_{q(p)}N$ to $T^*_pM$. The cotangent space is spanned by differentials, and given a cotangent vector $df_p = f_i\,dq^i \in T^*_{q(p)}N$, the mapping is
For any two tangent vectors $X_p = X^i \frac{\partial}{\partial x^i}$ and $Y_p = Y^i \frac{\partial}{\partial x^i}$ in $T_pM$, from the metric, we have
Therefore, the mapping q preserves the metric. With this particular choice of metric, q is an isometric immersion of (M, g) into (N, h). Therefore, when N is not affected by multi-valuedness of
internal coordinates, M simply describes the intrinsic structure of the feasible submanifold of N. When internal coordinates are multivalued, M is still smooth and single valued, and any unique
subset of the feasible submanifold of N can be mapped onto M.
In the context of geometry optimization, a successful method for obtaining linearly independent subspaces from the redundant internal coordinates is the G matrix projection procedure. So far, it has not been applied to non-local problems such as interpolation because of the local nature of the method. We prove that this process is equivalent to the choice of a specific basis set for $T_pM$ and that the interpolation process described above is in fact equivalent to linear interpolation on the G matrix projected basis.
The G matrix projection procedure is first briefly reviewed here. At a particular point, the G matrix is constructed from Wilson’s B matrix^18,19
$G = BB^{T}, \qquad G_{ij} = B_{ik}B_{jk} = \frac{\partial q^i}{\partial x^k}\frac{\partial q^j}{\partial x^k}.$
The similarity with the matrix representation of the metric tensor is evident. In fact, the metric of the internal coordinate manifold is given by g = B^TB. An eigenvalue decomposition of G is then
where the diagonal matrix Λ contains all nonzero eigenvalues, K are the corresponding eigenvectors, and L spans the null space. Then, a non-redundant set of internal coordinates are constructed from
the redundant coordinates q using K
Vectors K can be further used to construct transformations between this non-redundant internal coordinate system and Cartesian coordinates.^22 The similarity between the matrix of the metric tensor
and the G matrix is not superficial. With any basis x[i], one can also perform an eigenvalue decomposition of the matrix representation of the metric tensor g
$g_{jk}J_{ki} = \lambda_i J_{ji}, \qquad J_{ki}J_{kj} = \delta_{ij}.$
Note that index i is not being summed over in Eq. (D4). Because of the requirement in Eq. (C3), all dim(M) eigenvalues are positive. Because the Cartesian basis is used, six of the eigenvalues are 0, which correspond to translational and rotational motions that are not in M and are therefore not included in the metric. The eigenvectors in Eq. (D4) are deeply connected with matrix K in the G matrix projection method. In fact, if we perform singular value decomposition (SVD) of B^20,26
where U and V are orthogonal and S is the diagonal matrix containing all singular values of B, then we have
Note that although $SS^T$ and $S^TS$ are not of the same dimensionality, they are both diagonal and contain the same nonzero diagonal elements, which equal the squared singular values of B and only differ in the number of zero diagonal elements.
That is, U contains all eigenvectors of G, while V contains all eigenvectors of g. Therefore, one can choose the sign and ordering convention in J and K so that K = U and J = V. For the nonzero block of S, we have
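The correspondence between the eigenvectors of G and those of g can be verified numerically; the random matrix below is a purely illustrative stand-in for a real Wilson B matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_int, n_cart = 5, 3                       # K redundant internals over n Cartesian DOFs
B = rng.standard_normal((n_int, n_cart))   # stand-in for Wilson's B matrix

G = B @ B.T                                # G matrix of the projection procedure
g = B.T @ B                                # matrix of the internal-coordinate metric

U, s, Vt = np.linalg.svd(B)                # B = U S V^T
V = Vt.T

# G = U (S S^T) U^T and g = V (S^T S) V^T share the nonzero eigenvalues
# s_i**2; U and V collect their eigenvectors, as stated in the text.
SSt = np.diag(np.append(s ** 2, np.zeros(n_int - len(s))))
assert np.allclose(G, U @ SSt @ U.T)
assert np.allclose(g, V @ np.diag(s ** 2) @ V.T)
```

The zero-padded block of SSt corresponds to the null space L of the G matrix decomposition.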
We define a basis of the cotangent space $T^*_pM$ as
Again, in Eq. (D11), the index i is not summed over. Since g is bilinear, it is readily seen that the basis obtained here, up to sign and ordering, is independent of the original choice of basis.
This basis is, in fact, the nonredundant basis constructed in the G matrix projection procedure, expressed intrinsically in manifold M instead of as an embedded structure in $N=RK$. To see this, we
first write Eq. (D3) in a more precise manner in the cotangent space at point p
Then apply the pullback Eq. (C4) to this equation and obtain
Then we insert Eq. (D10) and obtain
Comparing this with Eq. (D11), we have
In other words, the G matrix projected non-redundant basis $dQ^i$ is the basis of Eq. (D11) on (M, g) embedded in (N, h). However, while $dQ^i$ forms a basis for the cotangent space and thereby naturally induces a dual basis $Q_i$ in the tangent plane, it is not holonomic and cannot form a coordinate system. As noted earlier, a set of basis vectors of a tangent space $X_i$ is holonomic when the Lie bracket vanishes, i.e., $[X_i, X_j] = 0$.
For an arbitrary set of redundant internal coordinates q, this means that there exists no coordinate system that yields differentials satisfying Eq. (D12). Such a conclusion is to be expected since
this basis is orthonormal (because columns of K are orthonormal). Only a flat (Euclidean) manifold possesses basis vector fields which are both orthonormal and holonomic. In fact, a Euclidean space is
defined as a space with a holonomic basis that has the Euclidean metric g[ij] = δ[ij], i.e., orthonormal. More specifically, such a manifold is flat because in a holonomic basis the Christoffel
symbols can be written in the form of Eq. (A12), which vanish if the basis stays orthonormal in a finite neighborhood
As a result, the scalar curvature also vanishes
This is only possible when the coordinate system is non-redundant. In the redundant case, the feasible submanifold is, in general, not flat and thus $Q_i$ is generally not holonomic. As a result, should one try to obtain the value of $Q_i$ by integrating Eq. (D12), one would find that the integral is path dependent, and it is impossible to assign a unique coordinate value for geometric
configurations in a consistent way.
REFERENCES

R. A., "On the analytical mechanics of chemical reactions. Quantum mechanics of linear collisions," J. Chem. Phys.
H. S., "Large tunnelling corrections in chemical reaction rates," Adv. Chem. Phys.
R. A., "Unimolecular dissociations and free radical recombination reactions," J. Chem. Phys.
K. J. and M. C., "Development of transition-state theory," J. Phys. Chem.
"The activated complex in chemical reactions," J. Chem. Phys.
V. S. and T. J., "Discovering chemistry with an ab initio nanoreactor," Nat. Chem.
"An automated method to find transition states using chemical dynamics simulations," J. Comput. Chem.
A. L., A. J., and P. M., "Methods for exploring reaction space in molecular systems," Wiley Interdiscip. Rev.: Comput. Mol. Sci.
"Implementation and performance of the artificial force induced reaction method in the GRRM17 program," J. Comput. Chem.
"Intrinsic reaction coordinate: Calculation, bifurcation, and automated search," Int. J. Quantum Chem.
Z. W., A. J., and J. K., "To address surface reaction network complexity using scaling relations machine learning and DFT calculations," Nat. Commun.
T. A. and W. N., "The synchronous-transit method for determining reaction pathways and locating molecular transition states," Chem. Phys. Lett.
"Quantum and thermal effects in H2 dissociative adsorption: Evaluation of free energy barriers in multidimensional quantum systems," Phys. Rev. Lett.
"Optimization methods for finding minimum energy paths," J. Chem. Phys.
"An efficient algorithm for finding the minimum energy path for cation migration in ionic materials," J. Chem. Phys.
J. E., "Systematic ab initio gradient calculation of molecular geometries, force constants, and dipole moment derivatives," J. Am. Chem. Soc.
X. F. and P. W., "The calculation of ab initio molecular geometries: efficient optimization by natural internal coordinates and empirical correction by offset forces," J. Am. Chem. Soc.
E. B., J. C., P. C., and B. R., "Molecular vibrations: The theory of infrared and Raman vibrational spectra," J. Electrochem. Soc.
E. P., Introduction to Mechanics of Solids (Englewood Cliffs, NJ).
"Geometry optimization in redundant internal coordinates," J. Chem. Phys.
P. Y., H. B., and M. J., "Using redundant internal coordinates to optimize equilibrium geometries and transition states," J. Comput. Chem.
"Constrained optimization in delocalized internal coordinates," J. Comput. Chem.
S. R. and A. J., "Linear scaling geometry optimisation and transition state search in hybrid delocalised internal coordinates," Phys. Chem. Chem. Phys.
E. B., "A method of obtaining the expanded secular equation for the vibration frequencies of a molecule," J. Chem. Phys.
E. B., "Some mathematical methods for the study of molecular vibrations," J. Chem. Phys.
K. C., M. J. T., and M. A., "Polyatomic molecular potential energy surfaces by interpolation in local internal coordinates," J. Chem. Phys.
D. G., "Shape manifolds, procrustean metrics, and complex projective spaces," Bull. London Math. Soc.
M. A. and D. F., "Implications of rotation–inversion–permutation invariance for analytic molecular potential energy surfaces," J. Chem. Phys.
V. L. and E. B., "Invariant theory," in Algebraic Geometry IV: Linear Algebraic Groups, Invariant Theory, edited by A. N. and I. R. (Springer Berlin Heidelberg, Berlin, Heidelberg).
"Improved initial guess for minimum energy path calculations," J. Chem. Phys.
L. P., R. T., V. S., and T. J., "Automated discovery and refinement of reactive molecular dynamics pathways," J. Chem. Theory Comput.
"Intrinsic field theory of chemical reactions," Theor. Chim. Acta.
J. M., "Applications of analytic and geometry concepts of the theory of calculus of variations to the intrinsic reaction coordinate model," Mol. Phys.
J. M., "The reaction path intrinsic reaction coordinate method and the Hamilton–Jacobi theory," J. Chem. Phys.
"Chemical reaction paths and calculus of variations," Theor. Chem. Acc.
A. B. and H. B., "Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms," J. Chem. Phys.
A. B. and H. B., "Path optimization by a variational reaction coordinate method. II. Improved computational efficiency through internal coordinates and surface interpolation," J. Chem. Phys.
N. P., "On coordinate transformations in steepest descent path and stationary point locations," Int. J. Quantum Chem.
"Analysis of the concept of minimum energy path on the potential energy surface of chemically reacting systems," Theor. Chim. Acta.
"Differential geometry of chemically reacting systems," Theor. Chim. Acta.
Introduction to Tensor Analysis and the Calculus of Moving Surfaces (New York).
"Formulation of the reaction coordinate," J. Phys. Chem.
"C1 isometric imbeddings," Ann. Math.
"The imbedding problem for Riemannian manifolds," Ann. Math.
R. G. and J. M., "New alternative to the Dunham potential for diatomic molecules," J. Chem. Phys.
J. A., "Computing geodesic paths on manifolds," Proc. Natl. Acad. Sci. U. S. A.
J. A., "Fast marching methods," SIAM Rev.
J. A., "Level set and fast marching methods in image processing and computer vision," in Proceedings of 3rd IEEE International Conference on Image Processing.
J. V., "The path to efficiency: Fast marching method for safer, more efficient mobile robot trajectories," IEEE Rob. Automation Mag.
N. J., "Geometric modeling in shape space," ACM Trans. Graphics.
"The dehydro-Diels−Alder reaction," Chem. Rev.
J. E., "Ueber die Einwirkung von Essigsäureanhydrid auf Säuren der Acetylenreihe," Ber. Dtsch. Chem. Ges.
H. W., E. M., and B. J., "Role of solvent hydrogens in the dehydro Diels-Alder reaction," J. Org. Chem.
S. L. and H. C., "Organic compound synthesis on the primitive Earth."
J. M. and T. W., "DL-FIND: An open-source geometry optimizer for atomistic simulations," J. Phys. Chem. A.
A. V., I. S., and T. J., "Generating efficient quantum chemistry codes for novel architectures," J. Chem. Theory Comput.
I. S. and T. J., "Quantum chemistry on graphical processing units. III. Analytical energy gradients, geometry optimization, and first principles molecular dynamics," J. Chem. Theory Comput.
H. J. and T. J., "Quantum chemistry for solvated molecules on graphical processing units using polarizable continuum models," J. Chem. Theory Comput.
A. H., J. J., I. E., M. N., E. D., P. C., P. B., J. R., E. L., J. B., K. S., and K. W., "The atomic simulation environment—A Python library for working with atoms," J. Phys.: Condens. Matter.
© 2019 Author(s).
|
{"url":"https://pubs.aip.org/aip/jcp/article/150/16/164103/198363/Geodesic-interpolation-for-reaction-pathways","timestamp":"2024-11-03T10:42:35Z","content_type":"text/html","content_length":"606821","record_id":"<urn:uuid:618bd63f-e2f5-443e-9a3c-7fb5cfcea5db>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00657.warc.gz"}
|
How does a linear program solve a problem?
It solves any linear program; it detects redundant constraints in the problem formulation; it identifies instances when the objective value is unbounded over the feasible region; and it solves
problems with one or more optimal solutions. The method is also self-initiating.
Is the problem of linear programming in canonical form?
A problem with this structure is said to be in canonical form. This formulation might appear to be quite limited and restrictive; as we will see later, however, any linear programming problem can be
transformed so that it is in canonical form. Thus, the following discussion is valid for linear programs in general.
How is the simplex method used to solve linear programs?
This procedure, called the simplex method, proceeds by moving from one feasible solution to another, at each step improving the value of the objective function. Moreover, the method terminates after
a finite number of such transitions.
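The procedure just described is easy to sketch for problems already in canonical form. Below is a minimal, illustrative tableau simplex (maximize c·x subject to Ax ≤ b, x ≥ 0, with every b[i] ≥ 0), not a robust solver: it ignores degeneracy and cycling, and the example problem at the bottom is made up for illustration.

```python
def simplex_max(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0, assuming every b[i] >= 0."""
    m, n = len(A), len(c)
    # Tableau: one row per constraint (with slack columns), plus an objective row.
    T = [list(map(float, A[i]))
         + [1.0 if j == i else 0.0 for j in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(cj) for cj in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))        # the slack variables start in the basis
    while True:
        # Entering variable: most negative reduced cost in the objective row.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break                        # optimality criterion satisfied
        # Leaving variable: minimum-ratio test; no candidate means unbounded.
        rows = [i for i in range(m) if T[i][col] > 1e-9]
        if not rows:
            raise ValueError("objective is unbounded over the feasible region")
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        basis[row] = col
        pivot = T[row][col]
        T[row] = [v / pivot for v in T[row]]
        for i in range(m + 1):
            if i != row:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
    x = [0.0] * n
    for i, var in enumerate(basis):
        if var < n:
            x[var] = T[i][-1]
    return x, T[-1][-1]

# Example: maximize z = 3x1 + 2x2 subject to x1 + x2 <= 4 and x1 + 3x2 <= 6.
x, z = simplex_max([3, 2], [[1, 1], [1, 3]], [4, 6])
```

Each pass moves to an adjacent feasible solution that improves z, and the loop stops when no objective-row coefficient is negative, which is exactly the optimality criterion quoted below.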
What’s the optimal value for the linear program Z?
Since their coefficients in the objective function are negative, if either x₃ or x₄ is positive, z will be less than 20. Thus the maximum value for z is obtained when x₃ = x₄ = 0. To summarize this observation, we state the Optimality Criterion.
Which is the feasible region of the linear programming problem?
Any convex combination of (x₁, x₂) on the line 3x₁ + x₂ = 120 for x₁ ∈ [16, 35] will provide the largest possible value z(x₁, x₂) can take in the feasible region S. In a linear programming problem with no solution, the feasible region of the linear programming problem is empty; that is, there are no values for x₁ and x₂
What are the lecture notes for linear programming?
Linear Programming: Penn State Math 484 Lecture Notes, Version 1.8.3, by Christopher Griffin, © 2009-2014. Licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.
|
{"url":"https://corporatetaxratenow.com/how-does-a-linear-program-solve-a-problem/","timestamp":"2024-11-08T15:26:48Z","content_type":"text/html","content_length":"41445","record_id":"<urn:uuid:9f20042b-dec9-4d3b-bcd2-d608ff7755f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00876.warc.gz"}
|
FEXT paths are no longer a disturbance: they are now exploited as useful signal parts. Therefore the transmission quality is improved compared to the SISO-OFDM case (OFDM transmission over a (fictitious) perfectly shielded single wire pair). Similar results are known from MIMO radio transmission with multiple transmit and/or receive antennas, where multiple transmission paths are exploited, too (Raleigh and Cioffi, 1998; Raleigh and Jones, 1999).

The results show that under severe FEXT influence it is worth taking the FEXT signal paths into account (Fig. 3). At small FEXT couplings no significant gains are possible by MIMO-OFDM without PA compared to a perfectly shielded wire pair (SISO-OFDM), because the FEXT-coupled signal parts are very small. The results in Fig. 3 further show the potential of appropriate power allocation strategies. The absolute achievable gains depend on the actual cable type and on the isolation of the wire pairs.

In this contribution, the practical exploitation of the FEXT paths for improving the signal transmission quality was investigated in terms of an exemplary multicarrier transmission system on a symmetric copper cable. It was shown that MIMO-OFDM cable transmission enables gains in the BER performance, especially under severe FEXT influence. It could thereby be shown that power allocation is necessary to achieve a minimum bit-error rate. In the exemplary system considered here some restrictions were made, which directly lead to some open points for further investigation: in order to use MIMO-OFDM for cables of any length, the most important open point is the optimization of bit loading in combination with power allocation in the MIMO-OFDM context.
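As a rough illustration of why FEXT paths can be exploited, consider the SVD view used in the power-allocation literature cited below. This sketch is not the authors' system: the 2×2 channel values are hypothetical, and it shows classic capacity-oriented water-filling, whereas the power allocation in the paper targets a minimum bit-error rate.

```python
def eigenmodes(direct=1.0, fext=0.5):
    """Singular values of the symmetric 2x2 channel H = [[d, f], [f, d]].

    H^T H has eigenvalues (d + f)**2 and (d - f)**2, so the singular values
    (per-eigenmode gains after SVD equalization) follow in closed form.
    """
    return direct + fext, abs(direct - fext)

def waterfilling(gains, total_power, noise=1.0):
    """Capacity-oriented power allocation over parallel subchannels."""
    inv = sorted(noise / g ** 2 for g in gains)          # inverse-SNR levels
    active = len(inv)
    while active > 0:
        level = (total_power + sum(inv[:active])) / active
        if level > inv[active - 1]:
            break              # every one of the 'active' channels gets power
        active -= 1
    return [max(0.0, level - noise / g ** 2) for g in gains]

# Hypothetical strong-FEXT case: direct gain 1.0, coupling 0.5.
s1, s2 = eigenmodes(direct=1.0, fext=0.5)                # gains 1.5 and 0.5
powers = waterfilling([s1, s2], total_power=1.0)
```

With strong coupling the two eigenmode gains spread apart and the allocation pushes power onto the stronger mode; with the coupling near zero both modes resemble the SISO case and the allocation gains vanish, matching the observation above.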
Ahrens, A. and Lange, C. (2006). Transmit Power Allocation in SVD Equalized Multicarrier Systems. International Journal of Electronics and Communications (AEÜ), 60. Accepted for publication.
Aslanis, J. T. and Cioffi, J. M. (1992). Achievable Information Rates on Digital Subscriber Loops: Limiting Information Rates with Crosstalk Noise. IEEE Transactions on Communications, 40(2):361–372.
Bahai, A. R. S. and Saltzberg, B. R. (1999). Multi-Carrier Digital Communications – Theory and Applications of OFDM. Kluwer Academic/Plenum Publishers, New York, Boston, Dordrecht, London, Moscow.
Bingham, J. A. C. (2000). ADSL, VDSL, and Multicarrier Modulation. Wiley, New York.
Corless, R. M., Gonnet, G. H., Hare, D. E. G., Jeffrey, D. J., and Knuth, D. E. (1996). On the Lambert W Function. Advances in Computational Mathematics, 5:329–359.
Hanzo, L., Webb, W. T., and Keller, T. (2000). Single- and Multi-carrier Quadrature Amplitude Modulation. Wiley, Chichester, New York, 2nd edition.
Honig, M. L., Steiglitz, K., and Gopinath, B. (1990). Multichannel Signal Processing for Data Communications in the Presence of Crosstalk. IEEE Transactions on Communications, 38(4):551–558.
Jang, J. and Lee, K. B. (2003). Transmit Power Adaptation for Multiuser OFDM Systems. IEEE Journal on Selected Areas in Communications, 21(2):171–178.
Kalet, I. (1987). Optimization of Linearly Equalized QAM. IEEE Transactions on Communications.
Kovalyov, I. P. (2004). SDMA for Multipath Wireless Channels. Springer, New York.
Kreß, D. and Krieghoff, M. (1973). Elementare Approximation und Entzerrung bei der Übertragung von … über Koaxialkabel. Nachrichtentechnik Elektronik, 23(6):225–227.
Kreß, D., Krieghoff, M., and Gräfe, W.-R. (1975). Gütekriterien bei der Übertragung digitaler Signale. In XX. Internationales Wissenschaftliches Kolloquium, Nachrichtentechnik, pages 159–162, Ilmenau. Technische Hochschule.
Krongold, B. S., Ramchandran, K., and Jones, D. L. (2000). Computationally Efficient Optimal Power Allocation Algorithms for Multicarrier Communications Systems. IEEE Transactions on Communications.
Lange, C. and Ahrens, A. (2005). Channel Capacity of Twisted Wire Pairs in Multi-Pair Symmetric Copper Cables. In Fifth International Conference on Information, Communications and Signal Processing (ICICS), pages 1062–1066, Bangkok (Thailand).
Park, C. S. and Lee, K. B. (2004). Transmit Power Allocation for BER Performance Improvement in Multicarrier Systems. IEEE Transactions on Communications.
Proakis, J. G. (2000). Digital Communications. McGraw-Hill, New York, 4th edition.
Raleigh, G. G. and Cioffi, J. M. (1998). Spatio-Temporal Coding for Wireless Communication. IEEE Transactions on Communications, 46(3):357–366.
Raleigh, G. G. and Jones, V. K. (1999). Multivariate Modulation and Coding for Wireless Communication. IEEE Journal on Selected Areas in Communications.
Valenti, C. (2002). NEXT and FEXT Models for Twisted-Pair North American Loop Plant. IEEE Journal on Selected Areas in Communications, 20(5):893–900.
van Nee, R. and Prasad, R. (2000). OFDM for Wireless Multimedia Communications. Artech House, Boston and London.
|
{"url":"http://www.scitepress.net/PublishedPapers/2006/15680/pdf/index.html","timestamp":"2024-11-05T18:31:46Z","content_type":"application/xhtml+xml","content_length":"202533","record_id":"<urn:uuid:160cbe76-da0c-4fdb-86e2-c8dd3607478a>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00295.warc.gz"}
|
Problems & Exercises
9.2 The Second Condition for Equilibrium
(a) When opening a door, you push on it perpendicularly with a force of 55.0 N at a distance of 0.850 m from the hinges. What torque are you exerting relative to the hinges? (b) Does it matter if you
push at the same height as the hinges?
When tightening a bolt, you push perpendicularly on a wrench with a force of 165 N at a distance of 0.140 m from the center of the bolt. (a) How much torque are you exerting in newton × meters
(relative to the center of the bolt)? (b) Convert this torque to foot-pounds.
Two children push on opposite sides of a door during play. Both push horizontally and perpendicular to the door. One child pushes with a force of 17.5 N at a distance of 0.600 m from the hinges, and
the second child pushes at a distance of 0.450 m. What force must the second child exert to keep the door from moving? Assume that friction is negligible.
Use the second condition for equilibrium (net $τ = 0$) to calculate $F_p$ in Example 9.1, employing any data given or solved for in part (a) of the example.
Repeat the seesaw problem in Example 9.1 with the center of mass of the seesaw 0.160 m to the left of the pivot (on the side of the lighter child) and assuming a mass of 12 kg for the seesaw. The
other data given in the example remain unchanged. Explicitly show how you follow the steps in the Problem-solving Strategy for static equilibrium.
9.3 Stability
Suppose a horse leans against a wall, as in Figure 9.32. Calculate the force exerted on the wall, assuming that force is horizontal, while using the data in the schematic representation of the
situation. Note that the force exerted on the wall is equal in magnitude and opposite in direction to the force exerted on the horse, keeping it in equilibrium. The total mass of the horse and rider
is 500 kg. Take the data to be accurate to three digits.
Two children of masses 20 kg and 30 kg sit balanced on a seesaw, with the pivot point located at the center of the seesaw. If the children are separated by a distance of 3 m, at what distance from
the pivot point is the small child sitting in order to maintain the balance?
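A seesaw problem like the one above reduces to equating the two torques about the pivot; a quick numerical check using the stated masses (20 kg and 30 kg) and the 3 m separation:

```python
def seesaw_balance(m_small, m_large, separation):
    """Distances of each child from the pivot when the seesaw balances.

    Torque balance about the pivot: m_small * d_small = m_large * d_large,
    with d_small + d_large = separation (g cancels from both sides).
    """
    d_small = m_large * separation / (m_small + m_large)
    return d_small, separation - d_small

d_small, d_large = seesaw_balance(20, 30, 3)   # masses in kg, separation in m
```

The lighter child must sit 1.8 m from the pivot, the heavier one 1.2 m, so both torques are 36g N·m.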
(a) Calculate the magnitude and direction of the force on each foot of the horse in Figure 9.32 (two are on the ground), assuming the center of mass of the horse is midway between the feet. The total
mass of the horse and rider is 500 kg. (b) What is the minimum coefficient of friction between the hooves and the ground? Note that the force exerted by the wall is horizontal.
A person carries a plank of wood 2 m long, with one hand pushing down on it at one end with a force $F_1$ and the other hand holding it up at 50 cm from the end of the plank with force $F_2$. If the plank has a mass of 20 kg and its center of gravity is at the middle of the plank, what are the magnitudes of the forces $F_1$ and $F_2$?
A 17-m-high and 11-m-long wall under construction and its bracing are shown in Figure 9.33. The wall is in stable equilibrium without the bracing but can pivot at its base. Calculate the force
exerted by each of the 10 braces if a strong wind exerts a horizontal force of 650 N on each square meter of the wall. Assume that the net force from the wind acts at a height halfway up the wall and
that all braces exert equal forces parallel to their lengths. Neglect the thickness of the wall.
(a) What force must be exerted by the wind to support a 2.50-kg chicken in the position shown in Figure 9.34? (b) What is the ratio of this force to the chicken's weight? (c) Does this support the
contention that the chicken has a relatively stable construction?
Suppose the weight of the drawbridge in Figure 9.35 is supported entirely by its hinges and the opposite shore, so that its cables are slack. (a) What fraction of the weight is supported by the
opposite shore if the point of support is directly beneath the cable attachments? (b) What is the direction and magnitude of the force the hinges exert on the bridge under these circumstances? The
mass of the bridge is 2,500 kg.
Suppose a 900-kg car is on the bridge in Figure 9.35 with its center of mass halfway between the hinges and the cable attachments. The bridge is supported by the cables and hinges only. (a) Find the
force in the cables. (b) Find the direction and magnitude of the force exerted by the hinges on the bridge.
A sandwich board advertising sign is constructed as shown in Figure 9.36. The sign's mass is 8 kg. (a) Calculate the tension in the chain, assuming no friction between the legs and the sidewalk. (b)
What force is exerted by each side on the hinge?
(a) What minimum coefficient of friction is needed between the legs and the ground to keep the sign in Figure 9.36 in the position shown if the chain breaks? (b) What force is exerted by each side on
the hinge?
A gymnast is attempting to perform splits. From the information given in Figure 9.37, calculate the magnitude and direction of the force exerted on each foot by the floor.
9.4 Applications of Statics, Including Problem-Solving Strategies
To get up on the roof, a person (mass 70 kg) places a 6.00-m aluminum ladder (mass 10 kg) against the house on a concrete pad with the base of the ladder 2 m from the house. The ladder rests against
a plastic rain gutter, which we can assume to be frictionless. The center of mass of the ladder is 2 m from the bottom. The person is standing 3 m from the bottom. What are the magnitudes of the
forces on the ladder at the top and bottom?
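One way to sanity-check a ladder problem of this kind: take torques about the base (so the unknown base forces drop out) to find the frictionless top reaction, then use force balance for the bottom. The sketch below uses the numbers from the problem statement and assumes g = 9.8 m/s²:

```python
import math

g = 9.8                                   # m/s^2 (assumed value)
L, base = 6.0, 2.0                        # ladder length and base distance, m
m_person, m_ladder = 70.0, 10.0           # kg
h = math.sqrt(L ** 2 - base ** 2)         # height of the top contact point
# Horizontal lever arms: distance along the ladder scaled by base/L.
x_person = 3.0 * (base / L)
x_ladder = 2.0 * (base / L)
# Torque about the base: N_top * h balances the two weight torques.
N_top = (m_person * g * x_person + m_ladder * g * x_ladder) / h
# The bottom supplies the remaining horizontal and vertical components.
F_bottom = math.hypot(N_top, (m_person + m_ladder) * g)
```

This gives roughly 133 N at the top and about 795 N at the bottom.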
In Figure 9.22, the cg of the pole held by the pole vaulter is 2 m from the left hand, and the hands are 0.700 m apart. Calculate the force exerted by (a) his right hand and (b) his left hand. (c) If each hand supports half the weight of the pole in Figure 9.20, show that the second condition for equilibrium (net $τ = 0$) is satisfied for a pivot other than the one located at the center of gravity of the pole. Explicitly show how you follow the steps in the Problem-solving Strategy for static equilibrium described above.
9.5 Simple Machines
What is the mechanical advantage of a nail puller—similar to the one shown in Figure 9.24—where you exert a force 45 cm from the pivot and the nail is 1.8 cm on the other side? What minimum force must you exert to apply a force of 1,250 N to the nail?
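The nail-puller question is a direct application of a lever's mechanical advantage (effort arm over load arm); a one-line check with the stated distances:

```python
effort_arm, load_arm = 0.45, 0.018   # 45 cm and 1.8 cm, converted to metres
MA = effort_arm / load_arm           # mechanical advantage -> 25
F_min = 1250 / MA                    # minimum input force for 1,250 N -> 50 N
```

The same ratio works for the wheelbarrow and axle-and-wheel exercises that follow.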
Suppose you needed to raise a 250-kg mower a distance of 6 cm above the ground to change a tire. If you had a 2-m long lever, where would you place the fulcrum if your force was limited to 300 N?
(a) What is the mechanical advantage of a wheelbarrow, such as the one in Figure 9.25, if the center of gravity of the wheelbarrow and its load has a perpendicular lever arm of 5.50 cm, while the
hands have a perpendicular lever arm of 1.02 m? (b) What upward force should you exert to support the wheelbarrow and its load if their combined mass is 55 kg? (c) What force does the wheel exert on
the ground?
A typical car has an axle with a 1.10 cm radius driving a tire with a radius of 27.5 cm. What is its mechanical advantage, assuming the very simplified model in Figure 9.26(b)?
What force does the nail puller in Exercise 9.19 exert on the supporting surface? The nail puller has a mass of 2.10 kg.
If you used an ideal pulley of the type shown in Figure 9.27(a) to support a car engine of mass 115 kg, (a) what would be the tension in the rope? (b) What force must the ceiling supply, assuming you pull straight down on the rope? Neglect the pulley system's mass.
Repeat Exercise 9.24 for the pulley shown in Figure 9.27(c), assuming you pull straight up on the rope. The pulley system's mass is 7.00 kg.
9.6 Forces and Torques in Muscles and Joints
Verify that the force in the elbow joint in Example 9.4 is 407 N, as stated in the text.
Two muscles in the back of the leg pull on the Achilles tendon, as shown in Figure 9.38. What total force do they exert?
The upper leg muscle (quadriceps) exerts a force of 1,250 N, which is carried by a tendon over the kneecap (the patella) at the angles shown in Figure 9.39. Find the direction and magnitude of the
force exerted by the kneecap on the upper leg bone (the femur).
A device for exercising the upper leg muscle is shown in Figure 9.40, together with a schematic representation of an equivalent lever system. Calculate the force exerted by the upper leg muscle to
lift the mass at a constant speed. Explicitly show how you follow the steps in the Problem-solving Strategy for static equilibrium in Applications of Statics, Including Problem-Solving Strategies.
A person working at a drafting board may hold her head as shown in Figure 9.41, requiring muscle action to support the head. The three major acting forces are shown. Calculate the direction and
magnitude of the force supplied by the upper vertebrae $F_V$ to hold the head stationary, assuming that this force acts along a line through the center of mass, as do the weight and muscle force.
We analyzed the biceps muscle example with the angle between forearm and upper arm set at $90°$. Using the same numbers as in Example 9.4, find the force exerted by the biceps muscle when the angle is $120°$ and the forearm is in a downward position.
Even when the head is held erect, as in Figure 9.42, its center of mass is not directly over the principal point of support (the atlanto-occipital joint). The muscles at the back of the neck should
therefore exert a force to keep the head erect. That is why your head falls forward when you fall asleep in class. (a) Calculate the force exerted by these muscles using the information in the
figure. (b) What is the force exerted by the pivot on the head?
A 75-kg man stands on his toes by exerting an upward force through the Achilles tendon, as in Figure 9.43. (a) What is the force in the Achilles tendon if he stands on one foot? (b) Calculate the
force at the pivot of the simplified lever system shown—that force is representative of forces in the ankle joint.
A father lifts his child as shown in Figure 9.44. What force should the upper leg muscle exert to lift the child at a constant speed?
Unlike most of the other muscles in our bodies, the masseter muscle in the jaw, as illustrated in Figure 9.45, is attached relatively far from the joint, enabling large forces to be exerted by the
back teeth. (a) Using the information in the figure, calculate the force exerted by the lower teeth on the bullet. (b) Calculate the force on the joint.
Integrated Concepts
Suppose we replace the 4-kg book in Exercise 9.31 of the biceps muscle with an elastic exercise rope that obeys Hooke's Law. Assume its force constant $k = 600$ N/m. (a) How much is the rope stretched (past equilibrium) to provide the same force $F_B$ as in this example? Assume the rope is held in the hand at the same location as the book. (b) What force is on the biceps muscle if the exercise rope is pulled straight up so that the forearm makes an angle of $25°$ with the horizontal? Assume the biceps muscle is still perpendicular to the forearm.
(a) What force should the woman in Figure 9.46 exert on the floor with each hand to do a push-up? Assume that she moves up at a constant speed. (b) The triceps muscle at the back of her upper arm has
an effective lever arm of 1.75 cm, and she exerts force on the floor at a horizontal distance of 20 cm from the elbow joint. Calculate the magnitude of the force in each triceps muscle, and compare
it to her weight. (c) How much work does she do if her center of mass rises 0.240 m? (d) What is her useful power output if she does 25 pushups in one minute?
You have just planted a sturdy 2-m-tall palm tree in your front lawn for your mother's birthday. Your brother kicks a 500-g ball, which hits the top of the tree at a speed of 5 m/s and stays in
contact with it for 10 s. The ball falls to the ground near the base of the tree, and the recoil of the tree is minimal. (a) What is the force on the tree? (b) The length of the sturdy section of the
root is only 20 cm. Furthermore, the soil around the roots is loose, and we can assume that an effective force is applied at the tip of the 20 cm length. What is the effective force exerted by the
end of the tip of the root to keep the tree from toppling? Assume the tree will be uprooted rather than bend. (c) What could you have done to ensure that the tree does not uproot easily?
Unreasonable Results
Suppose two children are using a uniform seesaw that is 3 m long and has its center of mass over the pivot. The first child has a mass of 30 kg and sits 1.40 m from the pivot. (a) Calculate where the
second 18 kg child must sit to balance the seesaw. (b) What is unreasonable about the result? (c) Which premise is unreasonable, or which premises are inconsistent?
Construct Your Own Problem
Consider a method for measuring the mass of a person's arm in anatomical studies. The subject lies on her back and extends her relaxed arm to the side, and two scales are placed below the arm. One is
placed under the elbow, and the other under the back of her hand. Construct a problem in which you calculate the mass of the arm and find its center of mass based on the scale readings and the
distances of the scales from the shoulder joint. You must include a free body diagram of the arm to direct the analysis. Consider changing the position of the scale under the hand to provide more
information, if needed. You may wish to consult references to obtain reasonable mass values.
|
{"url":"https://texasgateway.org/resource/problems-exercises-7?binder_id=78551&book=79096","timestamp":"2024-11-11T00:52:15Z","content_type":"text/html","content_length":"166291","record_id":"<urn:uuid:762d39b6-e7d0-420b-9a98-b282534293d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00322.warc.gz"}
|
You cannot walk straight when the road bends.
— Romani proverb
Miroslav Kramar
Assistant Professor
David and Judi Proctor Department of Mathematics
University of Oklahoma
Physical Sciences Center (PHSC)
601 Elm Avenue, Room 423
Office: Room 1117
Free University Amsterdam
Supervisor: R. Vandervorst
Supervisor: K. Mischaikow
Assistant Professor
Research interests
I have always been interested in the mechanisms by which nature creates beautiful and complicated patterns and then develops them in front of our eyes. This led me to using analytical and topological
methods for exploring invariant sets of non-linear differential equations that are thought to govern some of these phenomena.
Nature, however, does not reveal itself in the form of differential equations directly, but rather as a point cloud collected by experimentalists. Today there is a tremendous amount of data but no
universal method for understanding it. In my current research, I use methods of algebraic topology and the power of computers to analyze large and potentially high dimensional data sets. An integral
part of my research is developing methods that allow a meaningful comparison of experimental and simulated data so that the similarities as well as the differences between them can be better understood.
In order to fully appreciate the dynamical mechanisms of nature, we need to treat our data as a time series. We apply topological methods and the theory of dynamical systems to study these time
series. Often, the most interesting dynamics happen in a subset of the space in which the point cloud is embedded. The dimension of this set tends to be much smaller than the dimension of the ambient
space. This opens a door for reconstructing the dynamics from the data in a more manageable space. I'm interested in using topological tools such as Conley index to show the existence of fixed
points, periodic orbits and other invariant sets hidden in the experimental data.
|
{"url":"https://math.ou.edu/~mkramar/","timestamp":"2024-11-14T21:33:43Z","content_type":"text/html","content_length":"8915","record_id":"<urn:uuid:5fa2b94b-fed1-45d7-9e42-7096d9de39f2>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00201.warc.gz"}
|
Amortizing Bond Premium Using the Effective Interest Rate Method | AccountingCoach
Amortizing Bond Premium with the Effective Interest Rate Method
When a bond is sold at a premium, the amount of the bond premium must be amortized to interest expense over the life of the bond. In other words, the credit balance in the account Premium on Bonds
Payable must be moved to the account Interest Expense thereby reducing interest expense in each of the accounting periods that the bond is outstanding.
The preferred method for amortizing the bond premium is the effective interest rate method or the effective interest method. Under the effective interest rate method the amount of interest expense in
a given year will correlate with the amount of the bond’s book value. This means that when a bond’s book value decreases, the amount of interest expense will decrease. In short, the effective
interest rate method is more logical than the straight-line method of amortizing bond premium.
Before we demonstrate the effective interest rate method for amortizing the bond premium pertaining to a 5-year 9% $100,000 bond issued in an 8% market for $104,100 on January 1, 2023, let’s outline
a few concepts:
1. The bond premium of $4,100 must be amortized to Interest Expense over the life of the bond. This amortization will cause the bond’s book value to decrease from $104,100 on January 1, 2023 to
$100,000 just prior to the bond maturing on December 31, 2027.
2. The corporation must make an interest payment of $4,500 ($100,000 x 9% x 6/12) on each June 30 and December 31. This means that the Cash account will be credited for $4,500 on each interest
payment date.
3. The effective interest rate method uses the market interest rate at the time that the bond was issued. In our example, the market interest rate on January 1, 2023 was 4% per semiannual period for
10 semiannual periods.
4. The effective interest rate is multiplied times the bond’s book value at the start of the accounting period to arrive at each period’s interest expense.
5. The difference between Item 2 and Item 4 is the amount of amortization.
The following table illustrates the effective interest rate method of amortizing the $4,100 premium on a corporation’s bonds payable:
Please make note of the following points:
• Column B shows the interest payments required in the bond contract: The bond’s stated rate of 9% per year divided by two semiannual periods = 4.5% per semiannual period times the face amount of
the bond
• Column C shows the interest expense. This calculation uses the market interest rate at the time the bond was issued: The market rate of 8% per year divided by two semiannual periods = 4%
• The interest expense in column C is the product of the 4% market interest rate per semiannual period times the book value of the bond at the start of the semiannual period. Notice how the
interest expense is decreasing with the decrease in the book value in column G. This correlation between the interest expense and the bond’s book value makes the effective interest rate method
the preferred method.
• Because the present value factors that we used were rounded to three decimal places, our calculations are not as precise as the amounts determined by use of computer software, a financial
calculator, or factors with more decimal places. As a result, the amounts in year 2027 required a small adjustment.
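The amortization schedule referred to above can be regenerated from the figures given in the text ($104,100 issue price, 4% effective semiannual rate, $4,500 semiannual payments, 10 periods). This sketch reproduces the arithmetic; because the issue price itself came from three-decimal present-value factors, the final book value lands slightly above $100,000, which is the small adjustment the text mentions for 2027:

```python
def premium_amortization(book_value=104_100.0, market_rate=0.04,
                         payment=4_500.0, periods=10):
    """Effective-interest amortization of a bond premium.

    Each period: interest expense = market rate x opening book value;
    premium amortized = cash payment - interest expense.
    """
    schedule = []
    for period in range(1, periods + 1):
        expense = round(market_rate * book_value, 2)
        amortization = round(payment - expense, 2)
        book_value = round(book_value - amortization, 2)
        schedule.append((period, payment, expense, amortization, book_value))
    return schedule

rows = premium_amortization()
# First period: expense 4,164.00; premium amortized 336.00; book value 103,764.00
```

Note how the interest expense column shrinks period by period along with the book value, which is exactly the correlation that makes the effective interest rate method preferred.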
If the company issues only annual financial statements and its accounting year ends on December 31, the amortization of the bond premium can be recorded at the interest payment dates by using the
amounts from the schedule above. In our example there was no accrued interest at the issue date of the bonds and there is no accrued interest at the end of each accounting year because the bonds pay
interest on June 30 and December 31. The entries for 2023, including the entry to record the bond issuance, are:
The journal entries for the year 2024 are:
The journal entries for 2025, 2026, and 2027 will also be taken from the schedule above.
Comparison of Amortization Methods
Below is a comparison of the amount of interest expense reported under the effective interest rate method and the straight-line method. Note that under the effective interest rate method the interest
expense for each year is decreasing as the book value of the bond decreases. Under the straight-line method the interest expense remains at a constant annual amount even though the book value of the
bond is decreasing. The accounting profession prefers the effective interest rate method, but allows the straight-line method when the amount of bond premium is not significant.
Notice that under both methods of amortization, the book value at the time the bonds were issued ($104,100) moves toward the bond’s maturity value of $100,000. The reason is that the bond premium of
$4,100 is being amortized to interest expense over the life of the bond.
Also notice that under both methods the corporation’s total interest expense over the life of the bond will be $40,900 ($45,000 of interest payments minus the $4,100 of premium received from the
purchasers of the bond when it was issued.)
|
{"url":"https://www.accountingcoach.com/bonds-payable/explanation/8","timestamp":"2024-11-02T14:00:50Z","content_type":"text/html","content_length":"116325","record_id":"<urn:uuid:ec2d2ec7-802b-4781-89ff-10e105ae1801>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00182.warc.gz"}
|
• INSTANCE: Multigraph
• SOLUTION: A collection of k cycles, each containing the initial vertex s, that collectively traverse every edge in the graph at least once.
• MEASURE: The maximum length of the k cycles.
• Good News: Approximable within 2-1/k [171].
Viggo Kann
|
{"url":"https://www.csc.kth.se/~viggo/wwwcompendium/node110.html","timestamp":"2024-11-04T12:31:50Z","content_type":"text/html","content_length":"3669","record_id":"<urn:uuid:6a7bb163-4e34-43bf-baa0-8dcec6ca7796>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00271.warc.gz"}
|
Agustin B.
About Agustin B.
Algebra, Calculus, Geometry, Trigonometry, Algebra 2, Midlevel (7-8) Math, Statistics, Pre-Calculus, Calculus BC
Mathematics, General Major from Universidad Abierta Interamericana - UAI
Career Experience
I'm a mathematician, Ph.D. researcher, and professor.
I Love Tutoring Because
I like sharing my knowledge and expertise with others, to inspire and contribute to the growth and development of kids.
Other Interests
Listening to Music, Playing Music, Sports
Math - Geometry
great teaching I was able to understand what he was teaching me and he made sure to answer any questions I had. this was a great session I will 100% come back if I need help again
Math - Geometry
Great tutor I understand my work better now
Math - Geometry
This person really helped me understand the n formula!
Math - Pre-Calculus
Great :D, and not begging for a survey like most tutors, just was actually a good tutor
|
{"url":"https://www.princetonreview.com/academic-tutoring/tutor/agustin%20b--9310742?s=ap%20statistics","timestamp":"2024-11-04T08:18:53Z","content_type":"application/xhtml+xml","content_length":"208688","record_id":"<urn:uuid:6331d3f1-9e73-4bc0-b72f-f9a7bea425aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00593.warc.gz"}
|
Numerical studies on shock wave and boundary layer interaction in the high-load turbine rotor cascades
Gas turbine engines are one of the most dominant aero power units today. With the development of aero engines, the design requirements for thrust-to-weight ratio keep increasing (Badran, 1999). In
order to increase the thrust-to-weight ratio, the turbines continue to develop towards high-load, transonic and large expansion ratio, which inevitably results in higher exit Mach number, the
formation of strong shock waves, and the SWBLI (Lo et al., 2017). These unsteady phenomena reduce the efficiency of the turbine and increase the risk of fatigue and thermal failure. Therefore, it is necessary to study the mechanisms of interaction and regulation between the shock wave and the boundary layer in transonic high-load turbines.
SWBLI is widespread in the external and internal flows of supersonic aircraft. It thickens the boundary layer and can cause it to separate, increasing flow loss and decreasing stability. The mechanism of SWBLI has been investigated by experiment, numerical simulation and theoretical analysis since it was first discovered in the 1930s, and more details have been found with the development of numerical simulation methods. Several reviews (Dolling, 2001; Babinsky and Harvey, 2011; Gaitonde, 2015) summarize the current state of knowledge about SWBLI. To reduce the influence of SWBLI on the performance of supersonic aircraft, control methods have been investigated. Depending on whether the control device is adjustable, these methods can be divided into passive and active control, including vortex generators (Titchener and Babinsky, 2013), bumps (Zhang et al., 2019), boundary layer bleed (Bagheri et al., 2021), plasma actuation (Yang et al., 2022) and other control methods. These control methods have proven effective in specific situations.
For turbine, the SWBLI was also investigated because it was common in high-load turbine as a result of the high Mach number and the formation of the shock waves. The SWBLI over the neighbouring vane
and the downstream row resulted in significant efficiency reduction and high cycle fatigue (Paniagua et al., 2008). Bian et al. (2020) studied the interaction mechanisms of shock wave with the
boundary layer and wake in the high-load nozzle guide vane by hybrid RANS/LES. The results showed that strong shock waves induced boundary layer separation, while the presence of the separation
bubble could in turn lead to a Mach reflection phenomenon. Sonoda et al. (2006, 2009) systematically studied the influence of local geometric optimization on turbine flow losses. The results found
that local geometric optimization of the suction surface profile of the guide vane could change the flow condition of the boundary layer, control the effect of SWBLI by changing one strong reflection
shock wave into two weak reflection shock waves. Based on the same idea, Lei et al. (2017) and Zhao et al. (2019) controlled flow condition of boundary layer and changed the features of SWBLI by
grooved vanes. Although some studies have addressed the mechanism and control of SWBLI in high-load turbines, its mechanism needs further study, given its unsteadiness, to support control methods that increase efficiency and stability.
In the present work, rotor blades with loading coefficients range from 1.6 to 2 were designed and the simulations of cascade were carried out by DDES to observe the influence of the loading
coefficient, the exit isentropic Mach number and the incidence angle on the structure of shock wave and SWBLI.
Investigated model
The design of rotor blade was based on the rotor blade of TTM-Stage (Erhard, 2000). Significant changes to the rotor blade of TTM-Stage were performed in order to achieve the high loading
coefficient. The key geometry parameters of the blade in the cascade with the loading coefficient of 2.0 are listed in Table 1. The other blades with smaller loading coefficients were obtained by
modifying turning angle based on the blade with the loading coefficient of 2.0.
Numerical method
For DDES, the SA-DDES model was employed to deal with the turbulent flow in time-dependent Navier-Stokes equations using the solver embedded into NUMECA software package. The DDES model provided
additional shielding functions to ensure that the model does not switch to Large Eddy Simulation (LES) within the boundary layer in order to avoid the limitation of high sensitivity to grid spacing.
A dual time-stepping approach was employed. At each physical time step, a steady-state problem was solved in pseudo time. In order to speed up the convergence, the multigrid strategy and
the implicit residual smoothing method were applied.
Boundary condition
The total pressure, the total temperature and the velocity vector on the inlet boundary were specified. The Mach number was obtained by extrapolation from the interior field. At the outlet boundary,
the static pressure was specified. The remaining dependent variables at the outlet boundary were obtained from the interior field by zero-order extrapolation.
Computational mesh
The grid was generated by the Autogrid5 module of the NUMECA software. A grid-independence study was performed to confirm the reliability of the numerical simulation, and its results are shown in Figure 1. The total pressure ratio represents the ratio of the outlet total pressure to the inlet total pressure. A grid of 9 million cells was chosen for the calculations as a balance between accuracy and computation time. Figure 2 shows the grid of the blade in the cascade. The first-layer grid scale was 5×10^−6 m and the average y+ value was about 1. The physical time step was set to 1×10^−6 s.
The steady calculation was carried out with 400 iterations and the DDES calculation with 350 iterations to ensure convergence of the results. The initial condition of the DDES calculation was the result of the steady calculation. Figure 3 shows that the root mean square global residual of the density of the RANS calculation reached 10^−5 at an exit isentropic Mach number of 0.9 and an incidence angle of 0° for the blade with a loading coefficient of 2.0. The residual is defined as the sum of the fluxes over all faces of each cell, and the data shown in Figure 3 are the logarithm of the root mean square global residual. The convergence of the other conditions was similar to that shown in Figure 3. The largest discrepancy between inlet and outlet mass flow was less than 0.05%, and the global residual fell below 10^−5, for the DDES results of all conditions.
Results and discussion
Comparison of flow fields with different exit isentropic Mach numbers
Figure 4 shows the total pressure loss coefficient at the exit for a loading coefficient of 2.0 and an incidence angle of 0° at different exit isentropic Mach numbers. The total pressure loss coefficient is expressed by Equation 1:

$\omega = \frac{P_0^* - P^*}{P_0^*}$

where P0∗ is the inlet total pressure and P∗ is the local total pressure.
The exit isentropic Mach number is expressed by Equation 2:

$M_{1,is} = \sqrt{\frac{2}{\gamma - 1}\left[\left(\frac{P_{0,\mathrm{freestream}}}{P_1}\right)^{\frac{\gamma - 1}{\gamma}} - 1\right]}$

where P0freestream is the total pressure in the freestream outside of the boundary layers (usually the inlet total pressure is used), P1 is the exit static pressure and γ is the ratio of specific heats, which is 1.4 for air. When the exit static pressure is fixed, the exit isentropic Mach number is changed by changing the inlet total pressure.
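As a quick numerical check of the isentropic relation, here is a small sketch (the function name and the sample pressure ratio are ours; γ = 1.4 for air):

```python
import math

def exit_isentropic_mach(p0, p1, gamma=1.4):
    """Isentropic Mach number from the total-to-static pressure ratio."""
    return math.sqrt(2.0 / (gamma - 1.0)
                     * ((p0 / p1) ** ((gamma - 1.0) / gamma) - 1.0))

# At the critical pressure ratio p0/p1 ≈ 1.893 (gamma = 1.4) the flow is sonic.
print(round(exit_isentropic_mach(1.893, 1.0), 3))
```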
As Figure 4 shows, the total pressure loss coefficient increases with the exit isentropic Mach number because of the higher Mach number and the formation of the shock wave. Figure 5 shows the total pressure loss coefficient along the pitch at the exit of mid-span at different exit isentropic Mach numbers. The total pressure loss coefficient shows a similar distribution as the exit isentropic Mach number increases from 0.9 to 1.1: there is a high-loss region at 0.87 pitch corresponding to the wake, where the loss rises from 0.105 to 0.166. At an exit isentropic Mach number of 1.2, the distribution of the total pressure loss coefficient is different: it is basically uniform along the pitch, indicating that the main flow and the wake are strongly mixed.
Figures 6 and 7 show the distributions of Mach number and pressure coefficient (Cp) at mid-span. The pressure coefficient is expressed by Equation 3:

$C_p = \frac{P - P_1}{P_0^* - P_1}$

where P is the local static pressure, P1 is the exit static pressure and P0∗ is the inlet total pressure.
As Figure 6 shows, no shock wave forms at an exit isentropic Mach number of 0.9. With the increase of the exit isentropic Mach number, shock waves are generated, consisting of an incident shock wave, a reflected shock wave and a Mach stem, and the Mach number ahead of the shock wave increases. The position of the shock wave on the suction surface moves towards the trailing edge and the angle between the incident and reflected shock waves increases.
In Figure 7, the sudden Cp rise on the suction surface, which is induced by the shock wave and boundary layer interaction, decreases as the exit isentropic Mach number increases. However, because the inlet total pressure differs between cases, the definition of Cp implies that the corresponding sudden static pressure rise actually increases, indicating that the intensity of the shock wave increases.
Figure 8 shows the distribution of the axial skin friction coefficient (Cf[z]) on the suction surface at mid-span. The axial skin friction coefficient is expressed by Equation 4:

$C_{f,z} = \frac{\tau_{\omega z}}{\tfrac{1}{2}\rho_{ref} V_{ref}^2}$

where τωz is the local axial wall shear stress, and ρref and Vref are the reference density and reference velocity, for which the average inlet density and the average inlet velocity are used.
Cf[z] can be used to represent the state of the boundary layer. The Cf[z] less than 0 indicates there is a backflow region on the suction surface of the blade. The point of separation and
reattachment of the flow is represented by the Cf[z] of 0, which can be used to calculate the length of the separating bubble. In Figure 8, the Cf[z] increases on the suction surface indicating that
the boundary layer transitions from laminar to turbulent flow. The Cf[z] suddenly drops to less than zero at the location where the shock wave occurs, which represents the point of separation. At
various exit isentropic Mach numbers, the location of the separation bubble matches that of the shock wave, which moves towards the trailing edge. There is no separation bubble formation at exit
isentropic Mach number of 0.9 and the length of the separation bubble increases from 1.18% to 1.29% Cx as the exit isentropic Mach number increases from 1.0 to 1.1. However, the length of the
separation bubble is 0.73% Cx at the exit isentropic Mach number of 1.2, which is smaller than at the lower exit isentropic Mach numbers. The length of the separation bubble therefore does not simply grow with the intensity of the shock wave: the characteristics of the separation bubble at mid-span are also influenced by the secondary flow from the endwall as the exit Mach number increases.
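The separation and reattachment points described above can be extracted programmatically from the sign of Cf[z]. A sketch with a synthetic Cf[z] profile standing in for the DDES data (the profile shape and all numbers here are illustrative assumptions):

```python
import math

# Synthetic Cf_z along the axial chord (x in fractions of Cx): positive in the
# attached boundary layer, dipping below zero near a shock placed at 0.7 Cx.
n = 1001
x = [i / (n - 1) for i in range(n)]
cf = [0.004 - 0.006 * math.exp(-((xi - 0.7) / 0.01) ** 2) for xi in x]

# Sign changes of Cf_z mark separation (+ to -) and reattachment (- to +).
crossings = [x[i] for i in range(1, n) if cf[i - 1] * cf[i] < 0]
x_sep, x_reatt = crossings[0], crossings[-1]
bubble_length_pct = 100.0 * (x_reatt - x_sep)
print(f"separation bubble from {x_sep:.3f} to {x_reatt:.3f} Cx "
      f"({bubble_length_pct:.2f}% Cx)")
```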
Comparison of flow fields with different loading coefficients
Figure 9 shows the total pressure loss coefficient at the exit for an exit isentropic Mach number of 1.2 and an incidence angle of 0° at different loading coefficients. The total pressure loss coefficient reaches its maximum of 0.097 at a loading coefficient of 2.0 and its minimum of 0.090 at a loading coefficient of 1.8, indicating no significant difference in the total pressure loss coefficient. Figure 10 shows the total pressure loss coefficient along the pitch at the exit of mid-span at different loading coefficients. The distribution of the total pressure loss coefficient and the position of the wake differ as the loading coefficient increases from 1.6 to 2.0, because the change in loading coefficient is achieved by changing the turning angle. There is a high-loss region corresponding to the wake at loading coefficients of 1.6 and 1.8, and another high-loss region at 0.7 pitch, caused by the SWBLI, at a loading coefficient of 1.6.
Figures 11 and 12 show the distributions of Mach number and Cp at mid-span. With the increase of the loading coefficient, the position of the shock wave moves towards the leading edge and the interaction between the shock wave and the wake strengthens, making the outlet flow field more uniform, as Figure 11 shows. The Mach number ahead of the shock wave at a loading coefficient of 1.6 is significantly higher than at the others. Figure 12 shows the same shift in the position of the shock wave and in the Cp rise induced by it. The sudden Cp rise induced by the shock wave at a loading coefficient of 1.6 is higher than at the others, indicating that the intensity of the shock wave is also higher.
Figure 13 shows the distribution of Cf[z] on the suction surface at mid-span. The transition at a loading coefficient of 1.6 is completed earlier than at loading coefficients of 1.8 and 2.0. The separation bubble induced by the shock wave is clearly longer at a loading coefficient of 1.6 than at the others, which leads to a higher total pressure loss coefficient at 1.6 than at 1.8. With the increase of the loading coefficient, the length of the separation bubble decreases from 3.8% to 0.73% Cx.
Comparison of flow fields with different incidence angles
Figure 14 shows the total pressure loss coefficient at the exit for a loading coefficient of 2.0 and an exit isentropic Mach number of 1.2 at different incidence angles. The total pressure loss coefficient reaches its maximum of 0.119 at an incidence angle of +15° and its minimum of 0.091 at an incidence angle of −7.5°, indicating that the loss increases at large positive incidence angles. Figure 15 shows the total pressure loss coefficient along the pitch at the exit of mid-span at different incidence angles. At the negative incidence angles, there is a high-loss region at 0.94 pitch corresponding to the wake and another high-loss region at 0.6 pitch caused by the flow separation induced by the SWBLI. The distribution of the total pressure loss coefficient at an incidence angle of 0° is basically uniform because of the strong mixing of the main flow and the wake. There is a high-loss region at 0.94 pitch corresponding to the wake at an incidence angle of +7.5°; at an incidence angle of +15°, however, the distribution is entirely different: the high-loss area is generated by separated flow from the suction surface, not by the wake.
Figure 16 shows the distribution of Mach number at mid-span. There is a large flow separation on the pressure surface at an incidence angle of −15° and a large flow separation on the suction surface at an incidence angle of +15°. The large flow separation on the pressure surface at −15° does not influence the trailing-edge shock wave. The structure of the shock wave and the flow field are similar at the different incidence angles except +15°. At +15° there is a shock wave at the leading edge of the suction surface, resulting in the large flow separation, the bending of the trailing-edge shock wave and the increase of the loss. The intensity of the wake and the interaction between the shock wave and the wake are reduced, leading to the different distribution of the total pressure loss coefficient, as Figure 15 shows.
Figure 17 shows the distribution of Cp at mid-span. Across the incidence angles, the position and intensity of the sudden Cp rise induced by the shock wave are essentially the same, but at an incidence angle of +15° the intensity of the rise increases significantly because the shock wave is bent and lies more nearly perpendicular to the flow direction. Additionally, there is a sudden Cp rise at the leading edge of the suction surface, marking the formation of the shock wave there.
Figure 18 shows the distribution of Cf[z] on the suction surface at mid-span. The Cf[z] drops sharply where the shock wave meets the suction surface, and a separation bubble is generated at all incidence angles except +15°. At an incidence angle of +15°, the boundary layer at the trailing edge of the suction surface thickens but no separation bubble is generated there, because a separation bubble with a length of 0.8% Cx forms at the leading edge of the suction surface. As the flow develops, open separation follows this separation bubble, as Figure 16 shows. At the other incidence angles the position of the separation bubble is almost the same and corresponds to the position of the shock wave. The length of the separation bubble is 1.11% Cx at the negative incidence angles, 0.73% Cx at an incidence angle of 0° and 1.12% Cx at +7.5°.
The rotor blades with high loading coefficient were designed with the aim of investigating the effects of the loading coefficient, the incidence angle, and the exit isentropic Mach number on the
shock wave as well as the SWBLI in the cascade. The research is summarized as follows.
1. With the increase of the exit isentropic Mach number, the total pressure loss coefficient increased by 2.8%. Unlike the other cases, the distribution of the total pressure loss coefficient at an exit isentropic Mach number of 1.2 is basically uniform, indicating that the main flow and the wake are strongly mixed. The shock waves generated at mid-span consist of an incident shock wave, a reflected shock wave and a Mach stem. The position of the shock wave on the suction surface moved towards the trailing edge, and the intensity of the shock wave and the angle between the incident and reflected shock waves increased. However, the length of the separation bubble did not grow beyond 1.29% Cx as the intensity of the shock wave increased, because of the influence of the secondary flow from the endwall.
2. With the increase of the loading coefficient, the total pressure loss coefficient increased by 0.5%. The distribution of the total pressure loss coefficient along the pitch and the position of the wake differ because of the different turning angles. The position of the shock wave at mid-span moved towards the leading edge, and the intensity of the shock wave at a loading coefficient of 1.6 was significantly higher than at the others, leading to a longer separation bubble (3.8% Cx) and a higher total pressure loss coefficient than at a loading coefficient of 1.8, because of the influence of the pronounced vortices on the upper and lower endwalls of the suction surface.
3. With the variation of the incidence angle, the total pressure loss coefficient increased by 2.2% from 0° to +15°, indicating that the loss increases at large positive incidence angles. The position of the shock wave at the trailing edge was almost unchanged. At the negative incidence angles, the distribution of the total pressure loss coefficient, the intensity of the shock wave and the separation bubble (with a length of 1.11% Cx) were the same. A large flow separation was generated near the leading edge of the pressure surface but had no influence on the flow field. At the positive incidence angles, the flow field and the shock wave at an incidence angle of +7.5° were basically the same as at the negative angles. At an incidence angle of +15°, the intensity of the shock wave at the trailing edge significantly increased and the shock wave was bent, but no separation bubble formed there because a large flow separation occurred near the leading edge of the suction surface, following the separation bubble with a length of 0.8% Cx induced by the shock wave, resulting in a different distribution of the total pressure loss coefficient and increased loss at the large positive incidence angle.
y+ Non-dimensional distance from the wall
P0freestream Total pressure in the freestream outside of the boundary layers
τωz Local axial wall shear stress
|
{"url":"https://journal.gpps.global/Numerical-studies-on-shock-wave-and-boundary-layer-interaction-in-the-high-load-turbine,186056,0,2.html","timestamp":"2024-11-11T01:51:14Z","content_type":"application/xhtml+xml","content_length":"116360","record_id":"<urn:uuid:bee2b698-4e6a-48e3-8bed-adff732b7631>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00209.warc.gz"}
|
Which mathematical concept can’t be expressed with Roman numerals?
The options:
• Multiplication
• Negative numbers
• 0
• Odd numbers
The answer: 0
Until the 12th century, when Arabic numerals replaced Roman numerals, the concept of 0 was not widely employed in Western mathematics. The Romans’ number system served them well for engineering and record keeping, but without a symbol for 0 it was limited in its ability to support abstract mathematics.
The Roman numeral system is an ancient numerical system that was used in the Roman Empire and is still used today in various contexts. The system uses a combination of letters to represent numbers,
with each letter having a specific value. However, there is one mathematical concept that cannot be expressed with Roman numerals, and that is the concept of zero.
Zero is a fundamental concept in mathematics and is used to represent the absence of a quantity or value. The concept of zero was not present in the Roman numeral system, as the system was based on a
counting system and did not include a symbol for zero. Instead, the system used a form of subtraction to represent values that were not directly represented by a single letter.
For example, to represent the number four in the Roman numeral system, the numeral IV was used, representing five minus one. Similarly, to represent the number nine, IX was used, representing ten minus one. This system of subtraction allowed a wide range of values to be represented, but it could not express the concept of zero.
The absence of a symbol for zero in the Roman numeral system can make certain calculations and mathematical operations more challenging. For example, representing the number 2021 in Roman numerals requires multiple symbols: MM (2000), XX (20), and I (1). In contrast, representing the same number in the decimal system requires only four digits: 2, 0, 2, and 1.
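The subtractive rules above are easy to mechanize. The converter below is our own illustrative code, and it makes the article's point concrete: there is simply no symbol to emit for zero.

```python
def to_roman(n: int) -> str:
    """Convert a positive integer to Roman numerals (subtractive notation)."""
    if n <= 0:
        # Roman numerals have no symbol for zero (or negatives).
        raise ValueError("no Roman numeral for zero or negative numbers")
    pairs = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
             (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
             (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in pairs:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

print(to_roman(2021))              # MMXXI
print(to_roman(4), to_roman(9))    # IV IX
```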
The concept of zero was first developed by Indian mathematicians in the fifth century, and it was later adopted by Arab mathematicians and spread to Europe during the Middle Ages. The inclusion of
zero in the decimal system revolutionized mathematics and allowed for the development of new concepts and techniques, such as algebra and calculus.
Today, the concept of zero remains an essential part of mathematics and is used in a wide range of applications, from basic arithmetic to advanced physics and engineering. Its inclusion in the decimal
system has made mathematical calculations more efficient and accurate, and it has opened up new avenues of research and discovery in the field of mathematics.
In summary, the Roman numeral system is an ancient numerical system that was used in the Roman Empire and is still used today in various contexts. However, the system is incapable of expressing the concept of
zero, which is a fundamental concept in mathematics. The absence of a symbol for zero in the Roman numeral system can make certain calculations and mathematical operations more challenging. Despite
its absence from the Roman numeral system, the concept of zero remains an essential part of mathematics and is used in a wide range of applications, reflecting its enduring legacy and importance in
the field of mathematics.
|
{"url":"https://apaitu.org/which-mathematical-concept-can-t-be-expressed-with-roman-numerals/","timestamp":"2024-11-09T07:57:39Z","content_type":"text/html","content_length":"58233","record_id":"<urn:uuid:4cd4d150-bcda-4554-8270-8a1554cc611f>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00604.warc.gz"}
|
Math Morphing Proximate and Evolutionary Mechanisms
The simplest algorithm for generating a representation of the Mandelbrot set is known as the escape time algorithm. A repeating calculation is performed for each x, y point in the plot area and based
on the behavior of that calculation, a color is chosen for that pixel.
The x and y locations of each point are used as starting values in a repeating, or iterating process of calculations. The result of the previous iteration defines the starting values for the next
iteration. The result of the previous iteration is also checked during the next iteration to see if a critical escape condition or bailout has been reached. If and when that condition is reached, the
calculation is stopped, the pixel is drawn, and the next x, y point is examined. For some starting values, escape occurs quickly, after only a small number of iterations. For starting values very
close to but not in the set, it may take hundreds or thousands of iterations to escape. For values within the Mandelbrot set, escape will never occur. The higher the maximum number of iterations, the
more detail and subtlety emerge in the final image, but the longer it will take to calculate the fractal image.
Escape conditions can be simple or complex. Because no complex number with a real or imaginary part greater than 2 can be part of the set, a common bailout is to escape when either coefficient
exceeds 2. A more computationally complex method, but which detects escapes sooner, is to compute the distance from the origin using the Pythagorean Theorem, and if this distance exceeds two, the
point has reached escape. More computationally-intensive rendering variations such as Buddhabrot detect an escape, then use values iterated along the way.
The color of each point represents how quickly the values reached the escape point. Often black is used to show values that fail to escape before the iteration limit, and gradually brighter colors
are used for points that escape. This generates a visual representation of how many cycles were required before reaching the escape condition.
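The escape time algorithm described above can be sketched in a few lines of Python (the function name and parameter defaults are our own choices):

```python
def escape_time(x: float, y: float, max_iter: int = 100, bailout: float = 2.0) -> int:
    """Iterations before z = z*z + c escapes |z| > bailout; max_iter if it never does."""
    c = complex(x, y)
    z = 0j
    for i in range(max_iter):
        if abs(z) > bailout:       # the Pythagorean distance check described above
            return i
        z = z * z + c
    return max_iter                # treated as "in the set" at this iteration limit

# c = 0 never escapes (it is in the set); c = 2 escapes within a few iterations.
print(escape_time(0.0, 0.0), escape_time(2.0, 0.0))
```

The returned count is exactly the quantity mapped to a color per pixel: `max_iter` (typically drawn black) for points that never escape, smaller values for points that escape quickly.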
|
{"url":"https://teachersinstitute.yale.edu/curriculum/units/2009/5/09.05.09/11","timestamp":"2024-11-02T03:14:11Z","content_type":"text/html","content_length":"39940","record_id":"<urn:uuid:01cef4af-ec9c-4d92-ab34-f95e48317ce9>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00604.warc.gz"}
|
Overview of Multistage Filters
Multistage filters are composed of several filter stages connected in series or parallel.
When you need to change the sample rate of a signal by a large factor, or implement a filter with a very narrow transition width, it is more efficient to implement the design in two or more stages
rather than in one single stage. When the design is long (contains many coefficients) and costly (requires many multiplications and additions per input sample), the multistage approach is more
efficient to implement compared to the single-stage approach.
Implementing a multirate filter with a large rate conversion factor using multiple stages allows for a gradual decrease or increase in the sample rate, allowing for a more relaxed set of requirements
for the anti-aliasing or anti-imaging filter at each stage. Implementing a filter with a very narrow transition width in a single stage requires many coefficients and many multiplications and
additions per input sample. When there are strict hardware requirements and it is impossible to implement long filters, the multistage approach acts as an efficient alternative. Though a multistage
approach is efficient to implement, design advantages come at the cost of increased complexity.
Multistage Decimator
Consider an I-stage decimator. The overall decimation factor M is split into smaller factors with each factor being the decimation factor of the corresponding individual stage. The combined
decimation of all the individual stages must equal the overall decimation. The combined response must meet or exceed the given design specifications.
The overall decimation factor M is expressed as the product of smaller factors:
$M = M_1 M_2 \cdots M_I$
where M[i] is the decimation factor for stage i. Each stage is an independent decimator. The sample rate at the output of each i^th stage is:
$f_i = \frac{f_{i-1}}{M_i}, \quad i = 1, 2, \ldots, I$
If M ≫ 1, the multistage approach reduces computational and storage requirements significantly.
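As a concrete illustration of the per-stage sample rates (the 48 kHz input rate and the 4 × 4 × 3 split are our own example choices):

```python
# Decimation by M = 48 split as 4 x 4 x 3; input sample rate f0 = 48 kHz.
f0 = 48_000.0
stage_factors = [4, 4, 3]

rates = []
f = f0
for m in stage_factors:
    f /= m            # f_i = f_(i-1) / M_i
    rates.append(f)

print(rates)          # gradual reduction: [12000.0, 3000.0, 1000.0]
```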
Multistage Interpolator
Consider a J-stage interpolator. The overall interpolation factor L is split into smaller factors with each factor being the interpolation factor of the corresponding individual stage. The filter in
each interpolator eliminates the images introduced by the upsampling process in the corresponding interpolator. The combined interpolation of all the individual stages must equal the overall
interpolation. The combined response must meet or exceed the given design specifications.
The overall interpolation factor L is expressed as the product of smaller factors:
$L = L_1 L_2 \cdots L_J$
where L[j] is the interpolation factor for stage j. Each stage is an independent interpolator. The sample rate at the output of each j^th stage is:
$f_j = L_j\, f_{j-1}, \quad j = 1, 2, \ldots, J$
If L ≫ 1, the multistage approach reduces computational and storage requirements significantly.
Determine the number of stages and rate conversion factor for each stage
For a given rate conversion factor R, there is more than one possible configuration of filter stages. The number of stages and the rate conversion factor for each stage depends on the number of
smaller factors R can be divided into. An optimal configuration is the sequence of filter stages leading to the least computational effort, with the computational effort measured by the number of
multiplications per input sample, number of additions per input sample, and, in general, the total number of filter coefficients.
In the optimal configuration of multistage decimation filters, the shortest filter appears first and the longest filter (with the narrowest transition width) appears last. This sequence ensures that
the filter with the longest length operates at the lowest sample rate, thereby reducing the cost of implementing the filter significantly.
Similarly, in the optimal configuration of multistage interpolation filters, the longest filter appears first and the shortest filter appears last. This sequence again ensures that the filter with
the longest length operates at the lowest sample rate.
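The cost argument can be illustrated with a rough count of multiplications per input sample, where each output sample of a stage with an N-tap filter costs N multiplications. The filter lengths below are illustrative assumptions, not designed filters:

```python
# Rough cost comparison (multiplications per input sample) for decimation by 48.
def mults_per_input(stage_factors, stage_lengths):
    cost, rate = 0.0, 1.0
    for m, n in zip(stage_factors, stage_lengths):
        rate /= m              # this stage's output rate relative to the input
        cost += n * rate       # each output sample costs n multiplications
    return cost

single = mults_per_input([48], [4800])             # one long, narrow filter
multi = mults_per_input([4, 4, 3], [60, 80, 400])  # short first, longest last
print(single, multi)
```

Note how the longest (400-tap) filter is placed last, where it runs at the lowest rate: this is exactly the ordering rule stated above, and it is what makes the multistage total far smaller.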
The designRateConverter, designMultistageDecimator, and designMultistageInterpolator functions in DSP System Toolbox™ automatically determine the optimal configuration, which includes determining the
number of stages and the rate conversion factor for each stage. An optimal configuration leads to the least computational effort, and you can measure the cost of such an implementation using the cost
function. For an example, see Multistage Rate Conversion.
Colloquium2022S – Physics |
Missouri S&T
Physics Colloquium Spring 2022
Thursdays 4:00 p.m. 104 Physics or online via Zoom.
Contact colloquium organizer Dr. Shun Saito at saitos@mst.edu for the Zoom link.
Title: Pairing at Zero Density
Abstract: As far as we know all superconductors are condensates of paired electrons. Thus, pairing is an essential stage in the formation of a superconductor. According to standard lore, the key to
overcoming the strong electron-electron repulsion lies in their ability to exchange phonons, which vibrate at low frequency (the Debye frequency), orders of magnitude smaller than the electronic
frequency (the Fermi energy). Moreover, the superconducting transition temperature is an emergent scale that is exponentially sensitive to the density of free charge carriers, thus bounding the
density from below. The discovery of superconductivity in doped materials (semimetals, semiconductors, doped Mott insulators, twisted bilayer graphene etc.) challenges this understanding by defying
these basic requirements. In this talk I will discuss these issues in detail. I will then present a concrete example exhibiting superconductivity at any charge density, including zero: a Dirac semimetal in a crystal that is undergoing a structural quantum phase transition. This provides a new mechanism for superconductivity that does not suffer from a vanishing density and may open a path to understanding low-density superconductors.
Title: Ultrafast population inversion in nitrogen molecular ions by intense laser fields
Abstract: When an intense, femtosecond laser pulse is focused in air, coherent emission referred to as “air lasing” is generated. The lasing emission at 391 nm, corresponding to the transition from
the electronically excited B state to the electronic ground X state of N2+ is particularly interesting, since strong-field ionization of N2 is not expected to produce B-X population inversion.
By observing air lasing using few-cycle laser pulses, we showed in 2015 [1] that the B-X population inversion is formed on a few-fs time scale through the post-ionization excitation of N2+. In recent
measurements, we showed that the 391 nm lasing signal can be enhanced by 5 orders of magnitude by combining a polarization-gated 800 nm driving pulse with a 1600 nm IR pulse [2], and that the lasing
signal contains signatures of the coherent rotational, vibrational, and electronic motion of N2+ [3].
In the talk, I will give an overview of air lasing, explain the mechanism behind the ultrafast B-X population inversion in N2+, and discuss the theoretical simulation of the excitation process.
[1] H. Xu, E. Lötstedt, A. Iwasaki, and K. Yamanouchi, Nat. Commun. 6, 8347 (2015).
[2] H. Li et al., Phys. Rev. Lett. 125, 053201 (2020).
[3] T. Ando et al., Phys. Rev. Lett. 123, 203201 (2019).
Search for Light Bosons with King and Super-King Plots
There are many defects in the standard model of particle physics, such as the evident need for some form of dark matter. The main motivation for the present talk is the potential to complement or even extend what can be learned from high-energy particle experiments with high-accuracy atomic-physics measurements at low energy in the search for new physics beyond the standard model. One such search is the use of high-precision measurements of the atomic isotope shift in a sequence of isotopes that would be sensitive to an electron-neutron interaction mediated by light bosons. If such an interaction existed, it would presumably vary linearly with the number of neutrons in the nucleus, and so would produce a systematic deviation in the isotope shift measurements. The so-called King plot provides a sensitive method to search for such deviations. By taking second differences, we have recently developed a super-King plot [1] that can be applied to light heliumlike ions, where high-precision calculations are possible to assist the interpretation of laboratory measurements of isotope shifts. Results of calculations and prospects for future experiments will be presented.
[1] G.W.F. Drake, H.S. Dhindsar and V.J. Martin, Phys. Rev. A 105, L060801 (2021).
Title: Ultrafast chirality: twisting light to twist electrons
I will describe several new, extremely efficient approaches to chiral discrimination and enantio-sensitive molecular manipulation, which take advantage of ultrafast electronic response [1,2]. One of
them is based on the new concept of synthetic chiral light [3,4], which can be used to trigger bright nonlinear optical response in the molecule of a desired handedness while keeping its mirror twin
dark in the same frequency range. The other is based on the new concept of geometric magnetism in photoionization of chiral molecules and leads to a new class of enantiosensitive observables in
photoionization [5]. Crucially, the emergence of these new observables is associated with ultrafast excitation of chiral electronic or vibronic currents prior to ionization and can be viewed as their
unique signature.
[1] S Beaulieu et al, Photoexcitation circular dichroism in chiral molecules, Nature Physics 14 (5), 484 (2018)
[2] AF Ordonez, O Smirnova, Generalized perspective on chiral measurements without magnetic interactions, Physical Review A 98 (6), 063428, 2018
[3] D Ayuso et al, Synthetic chiral light for efficient control of chiral light–matter interaction, Nature Photonics 13 (12), 866-871, (2019)
[4] D. Ayuso et al "Enantio-sensitive unidirectional light bending”, Nat Commun 12, 3951 (2021)
[5] AF Ordonez, D Ayuso, P. Decleva, O Smirnova “Geometric magnetism and new enantio-sensitive observables in photoionization of chiral molecules” http://arxiv.org/abs/2106.14264
Title: Universal Physics of 2 or 3 or 4 Strongly Interacting Particles
By Chris Greene, Purdue University
Recent developments in the field of a few interacting particles with nonperturbative interactions will be reviewed, focusing on ultracold atomic physics, but with one recent application to the few-nucleon problem as well. Some of these studies are intimately connected with the Efimov effect, while others go beyond the standard Efimov effect with its remarkable infinity of long-range energy levels. Some of our relevant references addressing those topics are listed below.
[1] Nonadiabatic Molecular Association in Thermal Gases Driven by Radio-Frequency Pulses, Phys. Rev. Lett. 123, 043204 (2019), with Panos Giannakeas, Lev Khaykovich, and Jan-Michael Rost.
[2] Nonresonant Density of States Enhancement at Low Energies for Three or Four Neutrons, Phys. Rev. Lett. 125, 052501 (2020), with Michael Higgins, Alejandro Kievsky, and Michele Viviani.
[3] Efimov physics implications at p-wave fermionic unitarity, Phys. Rev. A 105, 013308 (2022), with Yu-Hsin Chen.
[4] Ultracold Heteronuclear Three-Body Systems: How Diabaticity Limits the Universality of Recombination into Shallow Dimers, Phys. Rev. Lett. 120, 023401 (2018), with Panos Giannakeas.
Simulating the Formation of our Milky Way Galaxy
Within the cosmic web, galaxies like our own Milky Way form as gas flows along cosmic filaments into dark-matter halos, fueling the formation of stars, while the resultant feedback from stars drives
strong outflows of gas. Understanding this nonlinear interplay between cosmic inflows and feedback-driven outflows is one of the most significant questions in astrophysics and cosmology, and it
requires a new generation of supercomputer simulations that can achieve high dynamic range to resolve the scales of stars within a cosmological environment. I will describe how we use massively
parallelized cosmological zoom-in simulations to model the physics of galaxy formation at unprecedented resolution. I will discuss new insight into the formation of our Milky Way galaxy, including
the faintest-known galaxies that orbit around it. These low-mass galaxies trace structure formation on the smallest cosmological scales and have presented the most significant challenges to the cold
dark matter paradigm. I will describe how these new generations of simulations are allowing us to shed light on dark matter.
TITLE: Working with Giorgio Parisi: Quantum Field Theory in Action
Quantum field theories such as quantum electrodynamics belong to the jewels of theoretical physics: They have allowed us to understand energy levels of simple atomic systems to an accuracy of 13 or 14 decimals, and to calculate the anomalous magnetic moment of the electron to 10 digits. On a completely different footing, the renormalization-group analysis of the critical point of the so-called N-vector model has allowed us to calculate critical exponents of phase transitions to unprecedented accuracy. Yet, all these methods rely on perturbative methods, which are encapsulated in so-called Feynman diagrams. These are diagrams which illustrate the perturbative corrections to a physical quantity in perturbation theory. In higher orders, the number of diagrams grows factorially, and all perturbation series eventually diverge, in the form of asymptotic series. While this fact does not significantly diminish the predictive power of quantum electrodynamics, in view of the smallness of the fine-structure constant alpha ~ 1/137, which is the quantum electrodynamic coupling parameter, the problem manifests itself prominently in the N-vector model, where the critical point (zero of the so-called $\beta$ function) is reached for coupling parameters of order unity. Decades of efforts into the calculation of higher-order Feynman diagrams typically end at the five-loop level (quantum electrodynamics) or seven-loop level (N-vector model). With Giorgio Parisi (Nobel Laureate, 2021, Rome) and Jean Zinn-Justin (CEA Saclay), we have been finding ways to overcome the predictive limits of perturbation theory, paving the way for diagrammatic expansions about infinite loop order, and showing that, in infinite loop order, the evaluation of Feynman diagrams becomes, again, a manageable task.
Unraveling the superconducting state of UTe2
Priscila F. S. Rosa
Los Alamos National Laboratory, Los Alamos, NM 87545 USA
Spin-triplet superconductors are a promising route in the search for topological superconductivity, and UTe2 is a recently discovered contender. In this talk, I will present a brief overview of key
experimental results on the superconducting state of UTe2 as well as some of its outstanding puzzles. I will then focus on recent developments in sample synthesis combined with thermodynamic
measurements that shed light on the role of disorder and magnetic fluctuations in UTe2. At the end of the talk, I will highlight some of the pressing outstanding open questions regarding the
superconducting order parameter of UTe2.
Nonlinear gravity in the late Universe
In this talk I will describe a number of novel ideas that arise in a nonlinear general relativistic treatment of cosmology, and recent progress modeling cosmological observables in this context.
These advances provide us with insight into subtle gravitational effects that allow us to test general relativity in new regimes; yet if unaccounted for, these same effects may mislead us in our
search for new physics. I will describe how these models not only allow us to study the behavior of general relativity at a nonlinear level, but provide us with a way to test the fundamental
assumptions our cosmological models are built upon.
Imaging Domain switching and Revealing Emergent Orders in 2D Antiferromagnets
The family of monolayer two-dimensional (2D) materials hosts a wide range of interesting phenomena, including superconductivity, charge density waves, topological states and ferromagnetism, but
direct evidence for antiferromagnetism in the monolayer has been lacking. Antiferromagnets have attracted enormous interest recently in spintronics due to the absence of stray fields and their
terahertz resonant frequency. Despite the great advantages of antiferromagnetic spintronics, controlling and directly detecting Neel vectors in 2D materials have been challenging. In my talk, I will
show that we have developed a sensitive second harmonic generation (SHG) microscope and detected long-range Neel antiferromagnetic (AFM) order down to the monolayer in MnPSe3[1]. In MnPSe3, we
observed the switching of an Ising-type Neel vector reversed by the time-reversal operation. We rotated them by an arbitrary angle irrespective of the lattice by applying uniaxial strain. The phase
transition in the presence of strain in MnPSe3 falls into the Ising universality class instead of the XY type, and the Ising Neel vector is locked to the strain. I will also talk about using the
newly developed optical confocal microscopy to image other emergent orders in 2D antiferromagnets[2, 3,4].
1. Ni et al. Nature Nanotechnology 16, 782-787 (2021)
2. Ni et al. Phys. Rev. Lett. 127, 187201 (2021)
3. Zhang et al. Nature Photonics (2022)
4. Ni, et al. Nano Letters (2022)
Fuller Prize Finalists 2022
Reece Beattie-Hauser: Scalar susceptibility of a diluted classical XY model
(Advisor: Dr. Thomas Vojta)
Charlie Kropp: Investigation of Time-Delay, Transmission, and Deposition Eigenchannels
(Advisor: Dr. Alexey Yamilov)
Anthony Lonsdale: Application of Molecular Spin Dynamics to Thermal Transport Problems
(Advisor: Dr. Aleksandr Chernatynskiy)
Ethan Pham: Electric Field Exfoliation for Two-Dimensional Nanolayered Materials
(Advisor: Dr. Yew San Hor)
Jordan Stevens: Early Dark Energy in light of precise cosmological observations
(Advisor: Dr. Shun Saito)
Title: Superconducting Energy Gap Probed by Particle Irradiations
The recent discovery of a room-temperature superconductor [1] has attracted renewed attention to the field of superconductivity. A superconductor is a material that shows zero resistivity and the Meissner effect below its critical temperature. These unique properties originate from the pairing of electrons below the critical temperature, which opens a superconducting energy gap. Thus, investigating the symmetry of the superconducting energy gap is the first and most important step in understanding a superconductor's properties.
High-energy particle irradiations (electron, proton, heavy ion, etc.) have been used to study various superconductors. Especially, the electron irradiation was effectively used to identify the
symmetry of superconducting energy gaps. In this talk, I will describe how particle irradiations help in understanding the superconducting energy gaps of the (Ba1-xKx)Fe2As2 iron-based superconductor [2], NbSe2 [3], and YBa2Cu3O7-δ [4].
[1] E. Snider et al., Nature 586, 373 (2020).
[2] K. Cho, et al., SCIENCE ADVANCES 2, e1600807 (2016).
[3] K. Cho, et al., Nature Communications 9, 2796 (2018).
[4] K. Cho, et al., Phys. Rev. B 105, 014514 (2022).
Cube Calendar
A desk calendar can be made using two cubes to show the day of the month.
What numbers would be on the faces of the cubes so that each date from 1 to 31 can be displayed?
A Mathematics Lesson Starter Of The Day
Topics: Starter | Problem Solving | Puzzles
How did you use this starter? Can you suggest how teachers could present or develop this resource? Do you have any comments? It is always useful to receive feedback and helps make this free resource
even more useful for Maths teachers anywhere in the world.
Click here to enter your comments.
Sign in to your Transum subscription account to see the answers
Use the printable net above to make your own cube calendar. A cassette case (Are you old enough to remember those?) makes an excellent stand for your cubes.
Teacher, do your students have access to computers such as tablets, iPads or Laptops? This page was really designed for projection on a whiteboard but if you really want the students to have access
to it here is a concise URL for a version of this page without the comments:
However it would be better to assign one of the student interactive activities below.
Here is the URL which will take them to a different type of calendar net.
Here is the URL which will take them to a calendar investigation.
Hint: For those of you who do not have a Transum subscription, you might be interested to know that the solution to this puzzle involves using a font such that the six, when turned upside down, can also be used as a nine!
How to Calculate Sigma in Excel (3 Effective Methods) - ExcelDemy
We have the Student Name in column B, ID in column C, and Mark in column D. We’ll calculate the sigma value for marks.
Method 1 – Calculating Sigma in Excel Manually
• Insert the following formula in cell D11.
• Press Enter and you will get the result for this cell.
• Insert the following formula in cell E5.
• You will get the result for this column.
• Enter the following formula in cell F5.
• Use the Fill Handle to apply the formula to all cells.
• Insert the following formula in cell D12.
• You will get the result for this cell.
• Insert the following formula in cell D13.
Method 2 – Using the STDEVP Function to Calculate Sigma in Excel
• Insert the following formula in cell D11.
• Press Enter and you will get the final result.
Method 3 – Calculating Sigma for Distributed Frequency
• Arrange the dataset similarly to the below image. We have Year in column B, Runs in column C, and Number of Batters in column D.
• Insert the following formula in cell E5.
• Use the Fill Handle to apply the formula to all cells in the column.
• You will get the result for the whole column.
• Insert the following formula in cell C13.
• Press Enter and you will get the result for this cell.
• Insert the following formula in cell F5.
• Use the Fill Handle to apply the formula to all cells.
• Insert the following formula in cell C14.
• Press Enter and you will get the result for this cell.
• Insert the following formula in cell C15.
• You will get the final result.
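Since the Excel formulas themselves are not reproduced above, here is a Python equivalent of the underlying calculations: STDEVP computes the population standard deviation (the square root of the mean squared deviation), and Method 3 applies the same idea with frequency weights. The sample numbers are illustrative, not the tutorial's dataset.

```python
# Population standard deviation (what Excel's STDEVP computes), plus the
# frequency-weighted variant used for a distributed-frequency table.
import math

def sigma(values):
    mean = sum(values) / len(values)
    return math.sqrt(sum((x - mean) ** 2 for x in values) / len(values))

def sigma_freq(values, freqs):
    """Population sigma for (value, count) pairs of a frequency table."""
    n = sum(freqs)
    mean = sum(v * f for v, f in zip(values, freqs)) / n
    return math.sqrt(sum(f * (v - mean) ** 2
                         for v, f in zip(values, freqs)) / n)

print(round(sigma([2, 4, 4, 4, 5, 5, 7, 9]), 4))               # 2.0
print(round(sigma_freq([2, 4, 5, 7, 9], [1, 3, 2, 1, 1]), 4))  # 2.0
```

The two calls agree because the second list is just the first dataset written as a frequency table.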
Bol Mega Sudoku 16X16 Large Print Extreme Volume 60 276 | Sudoku Printables
Bol Mega Sudoku 16X16 Large Print Extreme Volume 60 276 – If you’ve had any issues solving sudoku, you’re aware that there are several kinds of puzzles to choose from which is why it’s difficult for
you to pick which ones you’ll need to solve. But there are also many different ways to solve them. Moreover, you’ll discover that a printable version will prove to be a great way to get started. The
rules to solve sudoku are the same as those for other types of puzzles, however, the exact format differs slightly.
What Does the Word ‘Sudoku’ Mean?
The word ‘Sudoku’ is taken from the Japanese words suji and dokushin, meaning “number” and “unmarried” (single), respectively. The goal of the game is to fill every box with numbers such that each numeral from one to nine appears only once in each row. The term Sudoku is a trademark of the Japanese puzzle company Nikoli, which originated in Kyoto.
The name Sudoku comes from the Japanese phrase “sūji wa dokushin ni kagiru”, meaning “the numbers must be single”. The grid consists of nine 3×3 boxes, each containing nine smaller squares. Originally called Number Place, Sudoku is a logic puzzle. Although the exact origins of the game are unknown, Sudoku is known to have deep roots in ancient number puzzles.
Why is Sudoku So Addicting?
If you’ve ever played Sudoku, you’ll be aware of how addictive this game can be. A Sudoku addict can’t put down the thought of the next challenge they’ll solve. They’re always thinking about their next puzzle, while other aspects of their life tend to be left by the wayside. Sudoku is a very addictive game, but it’s important for players to keep its addictive potential under control. If you’ve developed a craving for Sudoku, here are some ways to curb your addiction.
One of the easiest ways to tell whether you’re addicted to Sudoku is to observe your own behavior. Most people carry magazines and books with them, while others simply browse social news posts. Sudoku addicts carry newspapers, books, exercise books, and smartphones everywhere they travel. They are constantly working on puzzles and don’t want to stop! Some even find it easier to complete Sudoku puzzles than standard crosswords. They simply can’t stop.
What is the Key to Solving a Sudoku Puzzle?
An effective method for solving a printable sudoku puzzle is to practice and experiment with various methods. The most effective Sudoku solvers do not apply the same approach to every puzzle. It is important to try out various approaches until you find the one that works for you. After some time, you’ll be able to solve sudoku puzzles without a problem! But how do you learn to solve a printable Sudoku game?
First, you must grasp the fundamental concept behind sudoku. It’s a game of reasoning and deduction that requires you to examine the puzzle from different angles to identify patterns and solve it. When solving a sudoku puzzle, you should not try to guess the numbers; instead, scan the grid for patterns. This method applies to rows, columns, and squares alike.
Difference between S_x S_x +S_y S_y and S+S- +S-S+ for MPO
I am trying to create an MPO with 3 or more spin interactions of type Sx Sy Sz with x, y and z at different sites and then find correlation of type Sx Sy Sz.
As a benchmark, first I am trying to study Heisenberg model and correlation function.
1. I am getting a different value of the correlation function if I use Sx Sx + Sz Sz instead of the usual S+S- + S-S+ for the MPO. Why am I getting different results for the correlation? Is there anything I can do so that using Sx, Sy in the MPO will give me the same results as S+S-?
2. Also, when I am trying to calculate correlations of terms like Sx Sx or Sy Sy, should I convert them to S+S- form as well?
Using Sx Sy terms instead of S+ S- term in MPO will simplify my code if I have 3 or more spin interaction .
Please suggest something which can help me understand and simplify things.
Thank you,
Here is snapshot of my code:
for(int j = 1; j <= N-4; j += 1)
    {
    ampo += -J3,"Sx",j,"Sy",j+1,"Sz",j+2;
    ampo += -J3,"Sy",j,"Sz",j+1,"Sx",j+2;
    ampo += -J3,"Sz",j,"Sx",j+1,"Sy",j+2;
    ampo += J3,"Sx",j,"Sz",j+1,"Sx",j+2;
    ampo += J3,"Sy",j,"Sx",j+1,"Sz",j+2;
    ampo += J3,"Sz",j,"Sy",j+1,"Sx",j+2;
    ampo += -J3,"Sx",j+1,"Sy",j+2,"Sz",j+3;
    ampo += -J3,"Sy",j+1,"Sz",j+2,"Sx",j+3;
    ampo += -J3,"Sz",j+1,"Sx",j+2,"Sy",j+3;
    ampo += J3,"Sx",j+1,"Sz",j+2,"Sx",j+3;
    ampo += J3,"Sy",j+1,"Sx",j+2,"Sz",j+3;
    ampo += J3,"Sz",j+1,"Sy",j+2,"Sx",j+3;
    }
Hi, thanks for the question.
So I believe the answer to your first question is that S+S- + S-S+ is proportional to SxSx + SySy, not SxSx+SzSz. So unless your wavefunction is perfectly spin-rotationally invariant you will in
general get a different answer when measuring these operators which are different.
You don’t have to convert operators to S+ S- form if you aren’t conserving the total Sz quantum number. However if you are conserving total Sz, you must input your Hamiltonian in terms of operators
which change total Sz by a well-defined amount, so mainly Sz, S+, and S- (and not Sx or Sy).
Best regards,
Thanks a lot for your reply, Miles. Yes, you are right, Sx Sx + Sy Sy is proportional to S+S- + S-S+; I made a mistake while writing. My Hamiltonian is SU(2) symmetric. I got a different result when I used the ampo += J3,"Sx",j,"Sz",j+1,"Sx",j+2; form than when I replaced the Sx and Sy terms by the combinations Sx = (S+ + S-)/2 and Sy = (S+ - S-)/(2i).
Experimental Validation of Graph-Based Hierarchical Control for Thermal Management
This paper proposes and experimentally validates a hierarchical control framework for fluid flow systems performing thermal management in mobile energy platforms. A graph-based modeling approach
derived from the conservation of mass and energy inherently captures coupling within and between physical domains. Hydrodynamic and thermodynamic graph-based models are experimentally validated on a
thermal-fluid testbed. A scalable hierarchical control framework using the graph-based models with model predictive control (MPC) is proposed to manage the multidomain and multi-timescale dynamics of
thermal management systems. The proposed hierarchical control framework is compared to decentralized and centralized benchmark controllers and found to maintain temperature bounds better while using
less electrical energy for actuation.
Issue Section:
Research Papers
Modern aircraft and large ground vehicles are complex machines consisting of interconnected subsystems that interact in multiple energy domains with a variety of dynamic timescales. A longstanding
trend of these systems is increasing electrical power requirements, necessarily accompanied by increased thermal loading [1,2]. This additional burden on thermal management systems is compounded by
the continual desire to reduce the size and weight of mobile platforms, and further exacerbated in aviation by a decreased ability to remove thermal energy due to the use of composite skin materials
with high thermal resistance and reduction in ram air heat exchanger cross sections [3]. Many approaches have been explored to meet this challenge, with a focus on employing transient analyses to
improve upon traditional steady-state analyses [4]. These approaches include integrated aircraft modeling [3], characterization of component performance under transient operation [5,6], design
optimization [7], improved topologies for thermal management [8], enhanced mission planning [9], and advanced control approaches [10–13].
Due to the inherent complexity and multidomain nature of thermal management systems, modular modeling frameworks have been developed to support system and control design. A graph-based approach to
modeling not only provides this modularity and scalability but can also readily facilitate model-based control design, as shown in Ref. [14] for building thermal systems, [15] for process systems,
and [12] and [16] for mobile thermal energy management systems.
To validate both modeling and control approaches for energy systems, experimental testbeds have been developed across a range of application areas. Examples include the vapor compression
refrigeration testbeds of Refs. [17] and [18], the hydraulic hybrid vehicle testbed of Ref. [19], the aircraft fuel thermal management system (FTMS) testbed of [12], and the shipboard chilled water
distribution system testbed of Ref. [20].
This paper proposes a hierarchical framework to address the challenge of controlling the multiple timescales and energy domains present in fluid-based thermal management systems. Hierarchical control
approaches have been widely investigated in the literature, including [21–25]. While much of this work focuses on deriving analytical guarantees of stability and/or robustness under appropriate
assumptions, practical demonstration of modeling and hierarchical control frameworks in application to physical systems is not readily found in the literature. Therefore, the purpose of this paper is
to provide such a demonstration. The graph-based model validation of Ref. [26] is first extended to a more complex testbed configuration. Then, this paper builds upon the initial simulation study of
Ref. [13] by experimentally demonstrating the hierarchical control approach and comparing this to several baseline control approaches. In comparison to these benchmarks, the hierarchical controller
is found to maintain temperature bounds better while using less energy for actuation.
The remainder of this paper is organized as follows. Section 2 presents the graph-based modeling framework. Model validation on an experimental testbed is shown in Sec. 3. Section 4 presents the
proposed hierarchical control framework, and Sec. 5 discusses the baseline decentralized and centralized control approaches. Section 6 experimentally demonstrates and compares the three control
approaches. Conclusions are provided in Sec. 7.
Notation: A vector x of elements $x_i$ is denoted by $x = [x_i]$, while $M = [m_{ij}]$ denotes a matrix M of elements $m_{ij}$. Lowercase superscripts are used throughout this paper in the naming of variables, while uppercase superscripts indicate mathematical operations, such as a transpose.
Graph-Based Dynamic Modeling
Generic Graph Formulation.
The graph-based models of this paper are derived by applying conservation equations to a component or system, inherently capturing the storage and transport of mass or energy. When modeling a system
as a graph, capacitive elements that store energy, or mass, are represented as vertices and the paths for the transport of energy, or mass, between storage elements are represented as edges. A key
feature of graph-based models that makes them attractive for model-based hierarchical control is that a graph-based model of a complete system can easily be partitioned into separate models for
individual subsystems by clustering the graph into subgraphs based on an analysis of the edges and vertices. Edges that are cut because of this partitioning represent coupling terms between the
subsystems, for which controllers of the hierarchy can account by exchanging information. This ease of partitioning reduces the labor required to generate the many models used throughout a
model-based hierarchical control framework.
Let the oriented graph $G=(v,e)$ represent the structure of storage and exchange of a conserved quantity throughout a system $S$. Graph $G$ is said to be of order N[v] with vertices $v=[vi],i∈[1,Nv]
$, and of size N[e] with edges $e=[ej],j∈[1,Ne]$. As shown in the notional graph example of Fig. 1, each edge e[j] is incident to two vertices and indicates directionality from its tail vertex
$vjtail$ to its head vertex $vjhead$. The set of edges directed into vertex v[i] is given by $eihead={ej|vjhead=vi}$, while the set of edges directed out of vertex v[i] is given by $eitail={ej|
vjtail=vi}$ [27].
Each vertex $v_i$ of graph $G$ has an associated dynamic state in system $S$, denoted as $x_i$, representing an amount of stored energy or mass. Similarly, each edge $e_j$ has an associated value $y_j$ describing the transfer rate of energy (i.e., power flow) or mass between adjacent vertices. The orientation of each edge indicates the direction of positive flow, from $v_j^{tail}$ to $v_j^{head}$. Therefore, the dynamics of each vertex satisfy the conservation equation
$C_i \dot{x}_i = \sum_{e_j \in e_i^{head}} y_j - \sum_{e_j \in e_i^{tail}} y_j$ (1)
where $C_i$ is the storage capacitance of the $i$th vertex.
The transfer rate $y_j$ along each edge is a function of the states of the vertices to which it is incident and may also be a function of an input signal $u_j$. The transfer rate along each edge is, therefore, given by
$y_j = f_j(x_j^{tail}, x_j^{head}, u_j)$ (2)
Figure 1 includes examples of Eqs. (1) and (2) as applied to several vertices and edges.
In addition to capturing the exchange of energy or mass within the graph, the modeling framework must account for exchange with entities external to the graph. Sources to graph $G$ are modeled by
source edges $ein=[ejin],j∈[1,Nin]$ with associated power flows $yin=[yjin]$, which are treated as disturbances to the system that may come from neighboring systems or the environment. Therefore,
edges belonging to e^in are not counted among the edges e of graph $G$, and transfer rates in y^in are not counted among the internal transfer rates y of system $S$.
Sinks of graph $G$ are modeled by sink vertices $vout=[vjout],j∈[1,Nout]$ with associated states $xout=[xjout]$. The sink vertices are counted among the vertices v of graph $G$ but the sink states x
^out are not included in the state vector x of system $S$. Instead, the sink states x^out are treated as disturbances to the system associated with neighboring systems or the environment.
To describe the structure of edge and vertex interconnections of a graph, the incidence matrix $M = [m_{ij}] \in \mathbb{R}^{N_v \times N_e}$ is defined as
$m_{ij} = \begin{cases} +1 & v_i \text{ is the tail of } e_j \\ -1 & v_i \text{ is the head of } e_j \\ 0 & \text{else} \end{cases}$
$M$ can then be partitioned as
$M = \begin{bmatrix} \overline{M} \\ \underline{M} \end{bmatrix}, \quad \overline{M} \in \mathbb{R}^{(N_v - N_{out}) \times N_e}, \quad \underline{M} \in \mathbb{R}^{N_{out} \times N_e}$
where the indexing of vertices is assumed to be ordered such that $\overline{M}$ is a structural mapping from the power flows $y$ to the states $x$, and $\underline{M}$ is a structural mapping from $y$ to the sink states $x^{out}$.
Similarly, the connection of external sources to the system is given by the matrix $D = [d_{ij}]$, where
$d_{ij} = \begin{cases} 1 & v_i \text{ is the head of } e_j^{in} \\ 0 & \text{else} \end{cases}$
Following from the conservation equation for each vertex (1) and the above definitions of $\overline{M}$ and $D$, the dynamics of all states in system $S$ are given by
$C\dot{x} = -\overline{M}y + Dy^{in}$
where $C = \mathrm{diag}([C_i])$ is the diagonal matrix of capacitances. Following from Eq. (2), the vector of all power flows $y$ is given by
$y = f(x, x^{out}, u)$
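As a concrete illustration of the generic formulation, the sketch below assembles the incidence matrix for a small hypothetical three-vertex chain and integrates the aggregate conservation law $C\dot{x} = -\overline{M}y + Dy^{in}$ with an explicit Euler step. The graph, capacitances, and flow values are illustrative assumptions, not parameters from the testbed.

```python
import numpy as np

# Hypothetical 3-vertex, 2-edge chain: v1 -e1-> v2 -e2-> v3, with v3 a sink.
# m_ij = +1 if v_i is the tail of e_j, -1 if v_i is the head of e_j.
M = np.array([[ 1,  0],   # v1: tail of e1
              [-1,  1],   # v2: head of e1, tail of e2
              [ 0, -1]])  # v3 (sink): head of e2
M_bar = M[:2, :]          # rows for the 2 dynamic (non-sink) vertices

D = np.array([[1.0], [0.0]])  # one source edge entering v1
C = np.diag([2.0, 1.0])       # storage capacitances of v1, v2

def step(x, y, y_in, dt):
    """One explicit-Euler step of C*xdot = -M_bar @ y + D @ y_in."""
    xdot = np.linalg.solve(C, -M_bar @ y + D @ y_in)
    return x + dt * xdot

x = np.array([0.0, 0.0])
x = step(x, y=np.array([0.5, 0.2]), y_in=np.array([1.0]), dt=0.1)
```

Partitioning the system into subsystems amounts to clustering rows of $M$ and treating cut edges as source terms for the neighboring cluster.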
Domain-Specific Modeling.
An exposition of how conservation of mass and energy can be applied to derive lumped parameter graph-based models for a variety of fluid-thermal components is found in Ref. [16]. These components
include a fluid reservoir, a flow split/junction, a pump, a pipe, a cold plate (CP) heat exchanger, and a liquid-to-liquid brazed-plate heat exchanger. Figure 2 shows the mass conservation and
thermal energy conservation graphs for each component. For graphs in the hydraulic domain, vertices represent dynamic states of pressure, while edges represent the rate of mass transfer between
vertices. For graphs in the thermal domain, vertices represent dynamic states of temperature, while edges represent thermal power flow between vertices due to convection in heat exchangers (HXs) or
fluid transport. Dashed lines, indicating sources or sinks of each component, consist of variables determined by neighboring components or disturbances.
Hydraulic Graph Modeling.
For notation purposes, a superscript m is used to denote capacitances, functions, and inputs associated with hydraulic graphs. The reader is referred to Ref. [16] for a detailed derivation of the
model equations that follow.
For all hydraulic vertices except those of a reservoir, the hydraulic capacitance is given by $Cim=Viρ/E$, where V is the fluid volume in the component and both the fluid density ρ and the bulk
modulus E are assumed to be constant. For reservoirs, $Cim=Ac,i/g$, where A[c] is the reservoir cross-sectional area and g is the gravitational acceleration.
For all hydraulic edges except those of a pump, the mass flow rate $\dot{m}_j$ is a square-root function of the pressure differential across the edge, where $L_j$, $D_j$, and $A_j$ are the fluid flow length, diameter, and cross-sectional area, respectively, $\Delta h_j$ is the height difference between the inlet and outlet flow, $f_j$ is the friction factor, and $K_j$ is the minor loss coefficient. For pumps, the mass flow rate is computed from the pump head $\Delta p_j^{pump}$, which is determined using an empirical map as a linear function of the pump speed $u_j^m$ and the pressure differential across the pump $\Delta p_j$,
$\Delta p_j^{pump} = \alpha_1 + \alpha_2 u_j^m + \alpha_3 \Delta p_j$
where $\alpha_1$, $\alpha_2$, and $\alpha_3$ are constants.
When hydraulic graphs of multiple components are interconnected to represent a system, the hydrodynamics can be represented in the generic system form derived in Sec. 2.1. The fluid system configuration used for demonstration in this paper consists of closed loops such that fluid mass does not enter or exit the system. Thus, in the notation of the hydraulic graph variables, the state dynamics reduce to
$C^m \dot{p} = -\overline{M}^m \dot{m}$
and the vector of mass flow rates is given by
$\dot{m} = f^m(p, u^m)$
Thermal Graph Modeling.
For notation purposes, a superscript t is used to denote capacitances and functions associated with thermal graphs. The reader is referred to Ref. [16] for a detailed derivation of the model
equations that follow.
For all vertices associated with a fluid temperature, the thermal capacitance is given by $Cit=ρVicp$, where the specific heat capacitance of the fluid c[p] is assumed to be constant. For all
vertices associated with heat exchanger wall temperatures, $Cit=Mw,icp,w$, where M[w] is the mass of the wall and $cp,w$ is the specific heat capacitance of the wall material.
The thermal power flow $P_j$ is specifically given by $P_j = c_p \dot{m}_j^t T_j^{tail}$ for thermal power flow due to fluid transport, and by $P_j = h_j A_{s,j}(T_j^{tail} - T_j^{head})$ for convective thermal power flow in HXs, where $A_{s,j}$ is the convective surface area and $h_j$ is the heat transfer coefficient, approximated in this paper as an empirical function of mass flow rate and temperature with constant coefficients.
When thermal graphs of multiple components are interconnected to represent a system, the thermodynamics can be represented in the generic system form derived in Sec. 2.1. In the notation of the thermal graph variables, the state dynamics are given by
$C^t \dot{T} = -\overline{M}^t P + D^t P^{in}$
and the vector of power flows is given by
$P = f^t(T, T^{out}, \dot{m}^t)$
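The two power flow forms described above can be sketched as follows. The specific-heat value and operating points are illustrative assumptions, and the advection and convection expressions follow the standard graph-based forms $P = c_p\dot{m}T^{tail}$ and $P = hA_s(T^{tail} - T^{head})$ rather than the exact equations of Ref. [16].

```python
CP_FLUID = 4186.0  # J/(kg*K), specific heat of water (illustrative assumption)

def advection_power(m_dot, T_tail, cp=CP_FLUID):
    """Thermal power carried by fluid transport out of the tail vertex."""
    return m_dot * cp * T_tail

def convection_power(h, A_s, T_tail, T_head):
    """Convective power flow across a heat-exchange surface."""
    return h * A_s * (T_tail - T_head)

P_flow = advection_power(0.2, 45.0)                 # 0.2 kg/s of fluid at 45 degC
P_conv = convection_power(800.0, 0.05, 60.0, 40.0)  # 800 W/(m^2 K), 0.05 m^2 surface
```

Note that the advection term is bilinear in $\dot{m}$ and $T$, which is the source of the linearization error discussed in Sec. 3.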
Multigraph System Representation.
Table 1 summarizes the quantities associated with each element of the graph-based framework, including in the generic sense of Sec. 2.1 and in the specific hydraulic and thermal domains of Secs. 2.3
and 2.4, respectively. The combined hydraulic and thermal system dynamics may be represented as two coupled graphs, as shown in Fig. 3. The hydraulic graph, denoted as $Gm$ in Fig. 3, is governed by
mass conservation laws, while the thermal graph, denoted as $Gt$, is governed by energy conservation laws. It is assumed in this paper that the coupling between these two graphs is limited to the
unidirectional influence of hydrodynamics on thermodynamics.
Table 1
Generic graph, $G$ Hydraulic graph, $Gm$ Thermal graph, $Gt$
Conserved quantity Mass Thermal energy
Vertex storage state, x Pressure, p Temperature, T
Edge transfer rate, y Mass flow rate, $m˙$ Power flow, P
Edge input, u Actuator effort, u^m Mass flow rate, $m˙t$
Figure 3 shows that the mass flow rates calculated along the edges of $Gm$ become inputs to the edges of $Gt$. However, the edges in $Gm$ may not align one-to-one with the edges in $Gt$, in particular when a single mass flow rate affects multiple edges of the thermal graph. It is also possible that some mass flow rate inputs to the thermal system originate from its surroundings and are not modeled within the hydraulic graph. For example, this could include flow rates on the secondary side of HXs by which thermal energy is transferred to and from neighboring systems. These external flow rates are denoted by $\dot{m}^{ext}$ and treated as disturbances to the thermal model. The mass flow rates within $Gm$ can then be mapped to the input mass flow rates of $Gt$ by
$\dot{m}^t = Z \begin{bmatrix} \dot{m} \\ \dot{m}^{ext} \end{bmatrix}$ (17)
where $Z \in \mathbb{R}^{N_e^t \times (N_e^m + N_{ext})}$.
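A minimal sketch of the edge-mapping matrix $Z$, assuming a hypothetical system with three thermal edges driven by two internal flow rates and one external flow rate:

```python
import numpy as np

# Hypothetical mapping: row k of Z selects which flow rate drives thermal edge k.
Z = np.array([[1, 0, 0],   # thermal edge 1 driven by hydraulic edge 1
              [1, 0, 0],   # thermal edge 2 shares that same flow rate
              [0, 0, 1]])  # thermal edge 3 driven by the external flow

m_dot     = np.array([0.3, 0.1])   # internal hydraulic flow rates (kg/s)
m_dot_ext = np.array([0.35])       # external disturbance flow rate (kg/s)

# Flow rates seen by the thermal edges
m_dot_t = Z @ np.concatenate([m_dot, m_dot_ext])
```

Rows with duplicate entries capture the case where one hydraulic edge feeds several thermal edges, as when a single stream passes through multiple heat exchanger segments.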
To capture the dynamics of pumps, including rate limits and time delays between each pump command $uip,i∈[1,Np]$ and the actual pump effort $uim$ affecting the hydraulic graph, each $uim$ is paired
with a single-input single-output (SISO) system $Sip$ as shown in Fig. 3. Each $Sip$ models the state of the ith pump $uim$ as a function of its commanded value $uip$.
In this paper, pump states and inputs are expressed in units of % PWM duty cycle. The dynamics of each pump are modeled as a first-order response with time constant $\tau_i$ and input delay $t_{d,i}$, given as a transfer function by
$S_i^p(s) = \frac{u_i^m(s)}{u_i^p(s)} = \frac{e^{-t_{d,i}s}}{\tau_i s + 1}$ (18)
Hydraulic Graph Linearization.
In general, the graph-based models have a nonlinear form but satisfy the generic relationships of Eqs. (1) and (2) for each vertex and edge. For control design in particular, it is often useful to use a linear representation of the system dynamics. A benefit of the graph-based approach is that a linear model of the full system can be generated by individually linearizing the equation associated with each edge.
From the component mass flow rate equations above, the nonlinear hydraulic mass flow rate equations for all components follow the general form of Eq. (19), in which the coefficients $c_{1,j}$ through $c_{4,j}$ are constant for each edge $j$, with $c_{4,j} \neq 0$ only for edges associated with pumps. Linearizing this expression about an equilibrium operating condition using a first-order Taylor series gives linear mass flow rate equations of the form
$\Delta\dot{m}_j = a_j(\Delta p_j^{tail} - \Delta p_j^{head}) + b_j \Delta u_j^m$
where the $\Delta$ denote deviations from the linearization point and $a_j$, $b_j$ are constant coefficients.
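The linearization step can be illustrated on a generic square-root pressure-flow relation; the coefficient and operating points are assumptions chosen only to show how the Taylor approximation degrades away from the linearization point.

```python
import math

C1 = 0.02  # illustrative flow coefficient (kg/s per sqrt(Pa))

def flow(dp):
    """Illustrative square-root pressure-flow relation, m_dot = C1*sqrt(dp)."""
    return C1 * math.sqrt(dp)

dp0 = 1.0e4                     # linearization point (Pa)
a = 0.5 * C1 / math.sqrt(dp0)   # analytic slope d(m_dot)/d(dp) at dp0

def flow_lin(dp):
    """First-order Taylor approximation about dp0."""
    return flow(dp0) + a * (dp - dp0)

# The approximation degrades away from dp0 -- the source of the model error
# noted later for the linear hydraulic model.
err_near = abs(flow(1.1e4) - flow_lin(1.1e4))  # 10% above dp0
err_far  = abs(flow(4.0e4) - flow_lin(4.0e4))  # 4x dp0
```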
By substitution into the hydraulic system dynamics, the linear equations for a complete hydraulic system model are given by Eq. (21), in which $\tilde{\overline{M}}{}^m$ represents the columns of $\overline{M}^m$ corresponding to edges associated with pumps, and $\tilde{b}^m$ is the vector of input coefficients for edges associated with pumps (i.e., edges $j$ for which $c_{4,j} \neq 0$ in Eq. (19)).
The output equation of the linearized hydraulic model, Eq. (23), relates pressures and pump efforts to mass flow rates, with input coefficients
$d_{jk}^m = \begin{cases} b_j^m & \text{edge } j \text{ is associated with pump } k \\ 0 & \text{else} \end{cases}$
Thermal Graph Linearization.
From the power flow equations for fluid transport and convection detailed above, together with the assumed expression for the heat transfer coefficient, the nonlinear power flow equations for all components follow a common general form whose coefficients are constant for each edge $j$. Linearizing this expression about an equilibrium operating condition using a first-order Taylor series gives linear power flow equations of the form
$\Delta P_j = a_{1,j}^t\,\Delta T_j^{tail} + a_{2,j}^t\,\Delta T_j^{head} + b_j^t\,\Delta \dot{m}_j^t$ (27)
By substitution into the thermal system dynamics, the linear equations for a complete thermal system model are given by Eq. (28), in which $M^t = [m_{i,j}^t]$ is a weighted incidence matrix for the thermal graph with
$m_{i,j}^t = \begin{cases} a_{1,j}^t & v_i \text{ is the tail of } e_j \\ a_{2,j}^t & v_i \text{ is the head of } e_j \\ 0 & \text{else} \end{cases}$
Modeling Example and Validation
Example Configuration Description.
Appendix A describes the experimental testbed used for validation in this paper. The testbed is pictured in Fig. 4 in an example system configuration used for demonstration throughout this paper.
The corresponding schematic is shown in Fig. 5. This configuration is notionally representative of a simplified aircraft FTMS in which heat from actuators, generators, engine oil, and other transient
loads is absorbed, stored in liquid fuel, and rejected through transfer to neighboring systems [28].
The sample configuration has eight pumps arranged in four sets of two in series. In this paper, the a and b pumps of each set receive the same commands. Therefore, for notational convenience, the two
pumps in each set are referred collectively. For example, pumps 1a and 1b are collectively termed as “pump 1.”
The secondary loop (identified as the left half of the system in Fig. 5) absorbs thermal loads from the heaters mounted to cold plate 1 (CP 1), through which fluid is driven by pump 1. This loop has
dedicated thermal storage available in reservoir 2, and the ability to exchange thermal energy with the main loop across brazed-plate heat exchanger 1 (HX 1) with fluid driven by pump 2.
The primary loop (identified as the right half of the system in Fig. 5) includes two parallel fluid flow paths out of reservoir 2. The path driven by pump 3 passes through heat exchanger 1,
exchanging thermal energy with the secondary loop. The path driven by pump 4 passes through CP 2 and CP 3 absorbing thermal energy produced by their heaters. The two flow paths then junction and pass
through HX 2, by which thermal energy is transferred out of the system to the thermal sink.
Graph-Based Representation of Example Configuration.
The hydrodynamics of the example testbed configuration in Fig. 5 are represented by the system graph shown in Fig. 6, formed by the interconnection of the individual hydraulic component graphs from
Fig. 2. This hydraulic graph consists of 32 vertices and 34 edges, which in turn set the number of pressure states and mass flow rates in the corresponding graph-based hydraulic model.
Figure 7 shows the thermal graph for the example testbed configuration, formed by the interconnection of the individual thermal component graphs from Fig. 2. The edges exiting the three leftmost dashed vertices indicate heat transfer from the resistive heaters to the cold plates, treated as disturbances to the system. The two edges on the right side of the graph connected to dashed vertices denote source and sink power flows from and to the chiller, which is treated in this work as an infinite source/sink of thermal energy. Thus, in the notation of the thermal graph variables, the disturbances to the system are the heat loads $Q_1$, $Q_2$, and $Q_3$, the external mass flow rate $\dot{m}^{ext}$, and the sink temperature $T_c$, where each $Q$ is the heat load to the corresponding cold plate heat exchanger, $\dot{m}^{ext}$ is the mass flow rate of chilled fluid through the right side of HX 2, and $T_c$ is the temperature of the fluid exiting the chiller and entering the right side of HX 2. The thermal graph consists of 39 vertices (one of which is a sink vertex), 41 edges, and 4 source edges. This results in a corresponding graph-based thermal model with 38 temperature states and 41 thermal power flows.
Experimental Validation of Example Configuration.
Figure 8 shows the commands and disturbances applied to the experimental system and models for validation. The linearization point used for the linear models is the steady-state operating condition
of the nonlinear models subject to commands and disturbances that fall approximately in the middle of the operating range. To demonstrate the repeatability of the system across multiple runs, five
experimental trials were conducted with the same commanded sequence. The traces for the chiller outlet temperature of Fig. 8 show the envelope between the maximum and minimum values measured at each
time among the five trials.
The heat loads plotted in Fig. 8 are translated into a reference current for each bank of resistive heaters using an empirical map between the applied electrical current and the achieved thermal
load. Each reference current is tracked by proportional–integral (PI) control of the corresponding solid-state relay.
The chiller is set to track a temperature set point of 20°C. Figure 8 shows that an average error of about 0.5°C from this set point is present due to measurement and tracking error within the chiller unit's internal controller.
Figures 9(a) and 9(b) show a selection of hydraulic and thermal signals, respectively, that result from applying the inputs and disturbances of Fig. 8. All experimental traces plotted show the
envelope between the maximum and minimum values measured at each time among five experimental trials. To make the width of these envelopes more clear, a closer view of several signals is provided in
Fig. 10, which demonstrates that the testbed exhibits a high degree of repeatability.
Figure 9(a) demonstrates a close match between the experimental data and the hydraulic graph-based models. While offsets occur at times between the models and the data, these are generally small relative to the magnitude of the changes that occur when commands change. Where differences exist between the two models, especially in the traces for the pump 2b and 3b mass flow rates, the nonlinear model is more accurate. This is due to the error incurred by linearizing the terms under the square root in Eqs. (8) and (9).
Figure 9(b) similarly demonstrates a high degree of accuracy in the nonlinear thermal graph-based model. The discrepancies that occur can be attributed to unmodeled friction/losses and errors in heat
transfer coefficients. However, in the interval from 750 to 850 s, significant error occurs for the linear model in signals pertaining to cold plates of the primary loop. This is largely due to the
combination of a low speed in pump 3 and a high speed in pump 4, which falls far from the linearization conditions. Linearization of the inherently bilinear fluid-thermal power flow equation $P=
m˙cpT$ decouples the relationship between mass flow rates and temperatures, and this can result in large error under some operating conditions. However, the linear thermal model still preserves the
correct gains during this time interval, which is critical to the design of stabilizing model-based controllers. The accuracy of the linear model at most other times across the 1000 s mission is close
to that of the nonlinear model. While these models could be improved at the cost of increased complexity, the accuracy of the proposed graph-based models is sufficient for their intended use in
closed-loop control.
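The linearization error of the bilinear transport term discussed above can be quantified with a small sketch; the specific heat and operating points are illustrative assumptions. For a purely bilinear term, the Taylor remainder is exactly $c_p\,\Delta\dot{m}\,\Delta T$, which grows with the distance from the linearization point.

```python
cp = 2000.0  # J/(kg*K), illustrative fluid specific heat

def P(m_dot, T):
    """Bilinear fluid-transport power flow, P = m_dot * cp * T."""
    return m_dot * cp * T

m0, T0 = 0.2, 30.0  # linearization point (kg/s, degC)

def P_lin(m_dot, T):
    """First-order Taylor expansion of the bilinear term about (m0, T0)."""
    return P(m0, T0) + cp * T0 * (m_dot - m0) + cp * m0 * (T - T0)

# Near the linearization point the error is small...
e1 = abs(P(0.22, 32.0) - P_lin(0.22, 32.0))
# ...but a low-flow/high-temperature combination far from (m0, T0) is much worse,
# mirroring the 750-850 s interval in the validation data.
e2 = abs(P(0.05, 45.0) - P_lin(0.05, 45.0))
```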
Graph-Based Hierarchical Control
Overview of Hierarchical Control Framework.
Figure 11 diagrams the hierarchical control framework developed for the example testbed configuration. This framework consists of four layers, as enclosed by the dashed box. The update period of each
control layer is included in Fig. 11 below its name. Lower layers have faster update rates because they are responsible for governing dynamics of faster timescales, progressing from slow thermal
dynamics, to faster thermal dynamics, to hydrodynamics, and finally to pump dynamics in the pump control layer.
The thermal system control layer at the top of the hierarchy is responsible for coordinating the overall long-term thermal behavior of the system, leveraging preview of expected thermal disturbances
for the mission at hand. Using model predictive control (MPC) with an update interval of 80 s, this controller is designed to primarily manage slow thermal dynamics, such as reservoir fluid
temperatures, cold plate wall temperatures, and heat exchanger wall temperatures.
While the thermal system control layer plans for the coordination of the full thermal system over a long time horizon, its slow update rate prevents it from governing faster system dynamics or quickly compensating for model error or for differences between the preview of expected future disturbances and the true disturbances that affect the system. However, simply increasing the update rate of the thermal
system control layer while maintaining the same time horizon may not be computationally tractable for complex systems, as this would increase the number of prediction steps to be solved by the
optimization program while decreasing the time between consecutive updates in which the program must be solved. This motivates the introduction of the faster updating thermal subsystem control layer.
The thermal subsystem control layer consists of two MPC controllers governing the thermal dynamics of the primary and secondary loops. The graph of the full thermal system in Fig. 7 is partitioned
into separate graphs for each loop. Using the same general control formulation as for the thermal system controller, graph-based thermal controllers with a 2 s update period are designed for each of
the thermal subsystems.
References to be tracked for select temperatures and thermal power flows are communicated from the thermal system control layer to the thermal subsystem control layer. This leverages the system-level coordination and long time horizon of the thermal system controller, while the faster update rate of the thermal subsystem layer compensates for model and preview error to achieve thermal objectives.
Because long-horizon planning is performed by the thermal system controller, the thermal subsystem controllers can be constructed with a relatively small number of steps in their prediction horizons.
The third layer in the hierarchy is the hydraulic subsystem control layer, which like the thermal subsystem control layer is partitioned into subsystems corresponding to each fluid loop. References
for mass flow rates are sent from each controller in the thermal subsystem control layer to the corresponding controller in the hydraulic subsystem control layer. In this paper, we assume that only
pressure measurements are available as hydraulic feedback, as pressure sensors may be preferred over mass flow rate sensors in implementation for their reduced cost, increased accuracy, and faster
response time.
The controllers in the hydraulic subsystem control layer determine references for the pump states. In the pump control layer, a set of decoupled SISO controllers track the desired pump states by
commanding the pump inputs, compensating for dynamics and time delays within each pump.
Thermal System Control Layer Formulation.
The thermal system control layer leverages available preview of upcoming thermal loads in calculating references for temperatures and power flows over a prediction horizon. The primary objective of thermal management is to regulate the temperature states of the system such that $\underline{T}_i \leq T_i \leq \overline{T}_i$, where $\underline{T}_i$ and $\overline{T}_i$ are lower and upper bounds, respectively, on the $i$th temperature. However, a secondary objective to maintaining temperature constraints is keeping the mass flow rates required to achieve the references small when possible, reducing the pump effort required for the system. The MPC in the thermal system control layer solves the constrained nonlinear program of Eqs. (32)–(34), minimizing the cost function of Eq. (32) subject to the constraints of Eqs. (33) and (34).
In Eqs. (32)–(33), $Nht$ is the number of steps in the prediction horizon of the thermal system control layer MPC. The subscript “preview” is included in some variables to indicate that preview of
expected values for these signals is assumed to be available over the time horizon of the controller as part of the system's planned mission. Equation (32) defines the cost function as minimizing the
thermal slack variable $st=[sit],i∈[1,Nvt]$ with weighting $λst$, the mass flow rates under control $m˙$ with weighting $λut$, the tracking error of a subset of system temperatures T^track from
references T^ref with weighting $λvt$, the tracking error of a subset of thermal power flows P^track from references P^ref with weighting $λet$, and the sum of the differences between mass flow
rates in consecutive steps with weighting $λdt$. The last of these objectives is used to ensure that control decisions remain smooth in time across the prediction horizon.
Equation (33a) sets the temperatures of the first step in the horizon equal to the current estimated values T[est]. Equations (33b) and (33c) define s^t as measuring the extent by which temperature
constraints are violated and impose that $sit(k)≥0 ∀i,k$. Equation (33d) imposes a discrete form of the linear thermal graph-based model from Eq. (28). Equation (33e) imposes the nonlinear thermal
power flow equation from Eq. (16) to compute those power flows with tracking objectives P^track. While the linearized thermal power flow equation of Eq. (27) could be used instead, the algebraic
nature of this equation means that large error can occur at any steps in the horizon when values of $m˙t$ are far from their linearization points, and so when computational resources are sufficient
to employ a nonlinear solver, the nonlinear power flow model is preferred.
Equations (33f) and (33g) constrain the mass flow rates of the system. Following Eq. (17), Eq. (33f) defines the relationship between the mass flow rates that affect the thermal system $m˙t$, the mass flow rates controlled by the hydraulic system $m˙$, and the mass flow rates that are external disturbances to the system $m˙ext$.
Equation (33g) defines the envelope of allowable mass flow rates. The presence of flow splits and junctions in the fluid loops means that it is possible for fluid to flow in the reverse direction
from the orientation indicated by the arrows in Fig. 5. As reverse flow would typically be undesirable in an aircraft FTMS, constraints must be included in the controllers to avoid this behavior. In
addition, constraints are required to define the envelope of achievable mass flow rates in each loop. The approach used to determine these constraints is detailed in Appendix B. For the thermal system model, $\dot{m} = \begin{bmatrix} \dot{m}^{pri} \\ \dot{m}^{sec} \end{bmatrix}$, $H = \begin{bmatrix} H^{pri} & 0 \\ 0 & H^{sec} \end{bmatrix}$, and $z = \begin{bmatrix} z^{pri} \\ z^{sec} \end{bmatrix}$ from Eqs. (B1) and (B2) of Appendix B.
Finally, Eq. (34) dictates that the mass flow rates chosen for the first step in the prediction horizon $m˙(0)$ be equal to those for the corresponding time chosen in the previous iteration of the
controller $m˙last(1)$. This is performed under the assumption that the controller is afforded one step in duration to calculate a solution, after which the controller output decisions corresponding
to the k=1 step are applied.
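The role of the slack variables in Eqs. (33b) and (33c) can be illustrated with a toy single-state linear program in the same spirit (not the paper's formulation): temperature bound violations are penalized heavily relative to flow effort, so slack is used only when the bounds cannot be met. The model coefficients, horizon, and weights are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy discretized thermal model: T_{k+1} = T_k + dt*(a*T_k + b*m_k + q),
# where m_k >= 0 is a cooling flow rate and q is a heat-load disturbance.
N, dt = 10, 2.0
alpha, beta, gamma = 1 + dt * (-0.01), dt * (-5.0), dt * 0.6
T0, T_lo, T_hi = 44.0, 40.0, 45.0

# Stack predictions: T = F + G @ m over the horizon.
F = np.array([alpha**k * T0 + sum(alpha**(k - 1 - j) * gamma for j in range(k))
              for k in range(N + 1)])
G = np.zeros((N + 1, N))
for k in range(1, N + 1):
    for j in range(k):
        G[k, j] = alpha**(k - 1 - j) * beta

# Decision vector z = [m_0..m_{N-1}, s_0..s_N], all nonnegative.
c = np.concatenate([np.ones(N), 1e3 * np.ones(N + 1)])  # effort + slack penalty
I = np.eye(N + 1)
A_ub = np.block([[ G, -I],    #  T <= T_hi + s
                 [-G, -I]])   # -T <= -T_lo + s  (i.e., T >= T_lo - s)
b_ub = np.concatenate([T_hi - F, F - T_lo])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
m_opt, s_opt = res.x[:N], res.x[N:]
```

Because the slack weight dominates the effort weight, the solver drives the slack to zero whenever the bounds are achievable, using just enough flow to hold the temperature at its limit.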
Thermal Subsystem Control Layer Formulation.
To create a model for each subsystem controller, the system graph in Fig. 7 is partitioned into subsystem graphs corresponding to the primary and secondary loops. The vertex corresponding to the wall
temperature of HX 1, which couples the thermal dynamics of the two loops, is represented as a sink vertex in each subsystem graph.
Select temperatures and thermal power flows determined in the thermal system control layer are passed as references to the thermal subsystem control layer. Temperature references are downsampled to
the update rate of the latter using a first-order hold, while power flow references are downsampled using a zero-order hold because they are calculated with an algebraic equation and can change
quickly due to the dependence on mass flow rates.
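The two hold types can be sketched as follows, assuming an integer ratio between the slow and fast update rates; the reference values are illustrative.

```python
import numpy as np

def zero_order_hold(ref_slow, ratio):
    """Repeat each slow-rate sample 'ratio' times (used for power flow refs)."""
    return np.repeat(ref_slow, ratio)

def first_order_hold(ref_slow, ratio):
    """Linearly interpolate between consecutive slow-rate samples
       (used for temperature refs, which evolve smoothly)."""
    t_slow = np.arange(len(ref_slow)) * ratio
    t_fast = np.arange((len(ref_slow) - 1) * ratio + 1)
    return np.interp(t_fast, t_slow, ref_slow)

refs = np.array([30.0, 34.0, 32.0])      # slow-rate reference samples
zoh = zero_order_hold(refs, 4)           # stepwise trace at the fast rate
foh = first_order_hold(refs, 4)          # piecewise-linear trace
```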
The MPC formulation for each of the thermal subsystem controllers is identical to that in Eqs. (32)–(34), subject to use of signals and model parameters corresponding to the thermal subsystem graph-based models diagrammed in Fig. 7. For the primary thermal subsystem, $\dot{m} = \dot{m}^{pri}$, $H = H^{pri}$, and $z = z^{pri}$; for the secondary thermal subsystem, $\dot{m} = \dot{m}^{sec}$, $H = H^{sec}$, and $z = z^{sec}$.
Hydraulic Control Layer Formulation.
Mass flow rates determined in the thermal subsystem control layer are passed as references to the hydraulic subsystem control layer. These references are downsampled to the update rate of the
hydraulic subsystem controllers using a zero-order hold. Each of the MPC controllers in the hydraulic control layer solves a constrained quadratic program using a discrete form of the linear
hydraulic graph-based model from Eqs. (21) and (23). The formulation for the hydraulic controllers is found in Ref. [13].
Pump Control Layer Formulation.
The pump control layer consists of MPC controllers for each pump. These track pump state references from the hydraulic subsystem control layer u^m by commanding the pump inputs u^p, compensating for
pump dynamics and delays. These references are downsampled to the update rate of the pump controllers using a zero-order hold. Each of the MPC controllers in the pump control layer solves a
constrained quadratic program using a discrete form of the pump model from Eq. (18). The formulation for the pump controllers is found in Ref. [13].
Benchmark Controllers
Decentralized Benchmark Control.
Figure 12 diagrams the decentralized benchmark controller for the example system configuration of this paper. Each set of pumps is paired with a PI loop that seeks to bring a temperature measurement
from the system to track a constant reference. Table 2 lists the signal used as feedback for each PI loop.
Table 2
Controller loop Pump Feedback signal Reference temp. (°C) Proportional gain Integral gain
PI 1 Pump 1 CP 1 wall temperature 42.5 3 0.05
PI 2 Pump 2 Reservoir 1 temperature 35 5 0.05
PI 3 Pump 3 HX 1 primary outlet temperature 35 5 0.05
PI 4 Pump 4 CP 3 fluid outlet temperature 40 1 0.05
The choice of the feedback signal for each loop reflects the primary purpose of the overall system to manage the temperature of the cold plate walls. PI 1 tracks a reference temperature for the CP 1
wall. PI 2 and PI 3 track reference temperatures for reservoir 1 and the HX 1 primary side outlet, respectively, governing the exchange of thermal energy between the secondary and primary loops by
controlling fluid flow on either side of HX 1. Because pump 4 moves fluid through both CP 2 and CP 3, using the wall temperature of just one of these cold plates as the feedback signal to PI 4 could
result in an inability to properly manage the temperature of the other cold plate. Therefore, the CP 3 outlet fluid temperature, which is affected by the wall temperature of both CP 2 and CP 3, is
used as the feedback signal to control pump 4.
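A sketch of one decentralized loop, assuming a discrete PI form with output saturation and a simple conditional anti-windup (implementation details the paper does not specify); the gains and reference for PI 1 are taken from Table 2.

```python
class PILoop:
    """Discrete PI controller with output saturation and conditional
       anti-windup (integration halts while the command saturates)."""
    def __init__(self, kp, ki, ref, u_min=0.0, u_max=100.0):
        self.kp, self.ki, self.ref = kp, ki, ref
        self.u_min, self.u_max = u_min, u_max
        self.integ = 0.0

    def update(self, meas, dt):
        err = meas - self.ref          # cooling loop: more flow when too hot
        u = self.kp * err + self.ki * self.integ
        if self.u_min < u < self.u_max:
            self.integ += err * dt     # integrate only when unsaturated
        return min(max(u, self.u_min), self.u_max)

# Gains and reference from Table 2; pump commands in % PWM duty cycle.
pi1 = PILoop(kp=3.0, ki=0.05, ref=42.5)   # pump 1 <- CP 1 wall temperature
cmd = pi1.update(meas=44.0, dt=1.0)
```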
Centralized Benchmark Control.
The centralized benchmark controller is identical to the thermal system control layer formulation of Sec. 4.2. The mass flow rates determined in this controller are translated into input commands to
each pump using a static mapping, as depicted in Fig. 13.
Closed-Loop Experiments
While heat loads in the experimental testbed are generated by resistive heaters, in application these would be generated by high-power electrical equipment, such as electrical actuators and
batteries, which may have tightly constrained operating temperatures. In this paper, temperature limits are embedded into the thermal MPC formulations as soft constraints $\underline{T}$ and $\overline{T}$, parametrized in
accordance with Table 3.
Table 3
Temperature $\underline{T}$ (°C) $\overline{T}$ (°C)
CP 1 Wall 40 45
CP 2 Wall 15 45
CP 3 Wall 15 50
All others 15 50
To evaluate the ability of the controllers to manage tight constraints, only 5°C separate the minimum and maximum temperature constraints for CP 1. Managing the temperature of CP 1 not only requires proper control of the pumps in the secondary subsystem but also coordination with the primary subsystem to transfer thermal energy across heat exchanger 1, providing cooling to the secondary loop.
The heat load to each cold plate, the temperature of the chiller outlet fluid, and the mass flow rate of the chiller outlet fluid all serve as disturbances to the system. The chiller is given a
constant temperature set point of 20°C and mass flow rate of 0.35 kg/s. These are assumed to be known to the thermal MPC controllers as preview information across their prediction horizon.
Figure 14 shows the cold plate heat loads for a 4800 s mission, which serves as a case study to evaluate closed-loop performance. Each heat load nominally performs several large step changes on the order of 1 kW, as shown in the top subplot of Fig. 14. However, the applied loads also include additive disturbances consisting of higher frequency, uniformly distributed noise stepped every 40 s of up to 200 W in each direction, as shown in the bottom subplot of Fig. 14. The heat loads are saturated to enforce a 1.7 kW maximum load to each cold plate. The nominal loading profile represents
reconfiguration of the system as mission phases change over hundreds or thousands of seconds, which in application could coincide with changes in duty cycles of electrical equipment generating heat,
charge and discharge of batteries, etc. As such, the nominal loads are assumed to be known to the thermal MPC controllers as mission preview information across their prediction horizon. However, the
higher frequency noise added to the nominal heat loads is not included in this preview information and instead serves as an unknown thermal disturbance.
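The load construction just described can be sketched numerically. The nominal step levels and step times below are illustrative stand-ins, while the ±200 W noise held over 40 s intervals and the 1.7 kW saturation follow the description in the text:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                       # sample time (s)
t = np.arange(0.0, 4800.0, dt)

# Illustrative nominal profile: large steps on the order of 1 kW.
nominal = np.where(t < 1600, 500.0, np.where(t < 3200, 1500.0, 800.0))  # W

# Uniformly distributed noise up to ±200 W, held constant over 40 s intervals.
n_levels = int(len(t) * dt // 40) + 1
noise_levels = rng.uniform(-200.0, 200.0, n_levels)
noise = noise_levels[(t // 40).astype(int)]

# Saturate to enforce the 1.7 kW maximum load per cold plate.
load = np.clip(nominal + noise, 0.0, 1700.0)
```

In the experiments the nominal part of `load` would be given to the MPC controllers as preview, while `noise` would remain unknown to them.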
Linear Kalman filters based on a discrete form of the linear graph-based models estimate the unmeasured states. These serve as the thermal state estimation and hydraulic state estimation blocks
pictured in Figs. 11 and 13.
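A single predict/update cycle of such a discrete linear Kalman filter looks as follows. The two-state system matrices are an illustrative stand-in, not the testbed's graph-based model:

```python
import numpy as np

def kalman_step(x, P, u, y, A, B, C, Q, R):
    """One predict/update cycle of a discrete-time linear Kalman filter."""
    # Predict with the (discretized) linear model.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update with the measurement y = C x + noise.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Two thermal states, one measured: the filter estimates the unmeasured state.
A = np.array([[0.95, 0.04], [0.03, 0.96]])
B = np.array([[0.1], [0.0]])
C = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([1.0]), np.array([0.2]), A, B, C, Q, R)
```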
The MPC-based controllers are formulated using the YALMIP toolbox [29]. Constrained quadratic programs are solved with the Gurobi optimization suite [30], while nonlinear programs are solved with the
IPOPT software package [31].
Decentralized Benchmark Control.
Table 2 lists the specific reference temperatures and gains for each PI controller loop, corresponding to the input/output pairings shown in Fig. 12. These were designed manually by iterating over
closed-loop simulations with the goal of bringing the system to achieve thermal objectives. The reference temperature for the CP 1 wall of 42.5°C falls exactly in the middle of its constraints. The
CP 3 fluid outlet temperature reference is 10°C below the CP 2 wall temperature upper bound, and 5°C below the CP 3 wall temperature upper bound.
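Each input/output loop in Table 2 amounts to a standard discrete PI controller with output saturation at the pump command limits. A minimal sketch; the gains here are illustrative, not the tuned values from Table 2:

```python
class DiscretePI:
    """Simple discrete-time PI controller with output saturation."""
    def __init__(self, kp, ki, dt, u_min=20.0, u_max=65.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        return min(max(u, self.u_min), self.u_max)

# Pump command (PWM %) driving a cold plate wall toward 42.5 °C.
# Negative gains: when the plate is hotter than the reference (negative
# error), the command, and hence the flow rate, should increase.
pi = DiscretePI(kp=-2.0, ki=-0.1, dt=1.0)
u = pi.update(reference=42.5, measurement=45.0)
```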
Figure 15(a) shows the pump commands chosen by the decentralized PI controller in experimental application and Fig. 15(b) shows the resulting cold plate wall temperatures. While the temperatures of
CP 1 and CP 3 are held closely within their constraints, the temperature of CP 2 is not, with multiple durations of violations more than 5°C at peak. This emphasizes a drawback of employing a
decentralized single-input single-output approach to control complex systems. One actuator may be responsible for managing many elements, but the control approach does not incorporate a comprehensive
awareness of the condition of all those elements. This is often the case in thermal management systems, where a single fluid line may be responsible for cooling multiple thermal loads [28].
Centralized Benchmark Control.
The centralized controller is updated every 80s. The number of steps in the prediction horizon is $N_h^{t,sys}=20$; therefore, the time horizon is 1600s. For this controller, $T^{track}$ consists of only the temperature of the CP 1 wall, which tracks a constant reference of 42.5°C, and there are no thermal power flow tracking objectives. The following weightings are used in the objective function of Eq. (32): $\lambda_s^{t,sys}=10^{4}$, $\lambda_u^{t,sys}=10^{-4}$, $\lambda_v^{t,sys}=10^{5}$, $\lambda_d^{t,sys}=10^{-3}$.
The contribution to the total objective function cost of each term of Eq. (32) is further scaled by the dimension and magnitude of the signals in each term. Therefore, the weightings above indicate that the highest priority is placed via $\lambda_s^{t,sys}$ on minimizing violations of the temperature bounds. The weighting on minimizing mass flow rates $\lambda_u^{t,sys}$ is comparatively small, indicating that this should only be done when possible without violating the temperature constraints.
Figure 16(a) shows the pump commands chosen by the centralized controller in experimental application and Fig. 16(b) shows the resulting cold plate wall temperatures. Large constraint violations occur in cold plate temperatures of the primary loop, which exceed constraints by up to 10°C. This is due to model error in the slowly updating controller. During periods of constraint
violation, the controller's model predicts that the CP 2 and CP 3 wall temperatures will decrease to within constraints over the next update period, during which the mass flow rates applied are those
chosen for the second step in the horizon of the previous iteration of the controller due to the one-step delay built into the control design to accommodate computation time. This limits the ability
of the controller to leverage measurement feedback in compensating for model and preview errors.
Hierarchical Control.
The thermal system control layer of the hierarchical framework in Fig. 11 is identical to the centralized benchmark controller of the previous section. The two controllers in the thermal subsystem control layer are updated every 2s, with $N_h^{t,sub}=2$ steps in the prediction horizon. A long time horizon is not critical for the thermal subsystem controllers because their primary objective is to track references from above in the hierarchy that have already been optimized over a long time horizon.
For each thermal subsystem controller, $T^{track}$ consists of references from the thermal system control layer for the temperature of all CP walls, HX walls, and reservoirs present in the subsystem. These represent the slowest dynamics, whose behaviors evolve over the timescale of the thermal system control layer. Because the HX 1 wall is treated as a sink temperature in both subsystem thermal graphs, the temperature for this vertex predicted by the thermal system control layer is included in the sink state preview $T_{preview}^{out}$ to each subsystem controller. $P^{track}$ for each thermal subsystem controller consists of references for the thermal power flow along edges that couple the subsystems to each other and to the chiller. In Fig. 7, these are labeled in the primary loop as edges 39 and 42, and in the secondary loop as edge 16. These power flow references ensure that the coordination among subsystems planned in the thermal system control layer is achieved by the thermal
subsystem control layer with no requirement for direct communication between subsystem controllers.
The following weightings are used in the objective function of Eq. (32) for each thermal subsystem controller: $\lambda_s^{t,sub}=10^{7}$, $\lambda_u^{t,sub}=10^{-6}$, $\lambda_v^{t,sub}=10^{5}$, $\lambda_e^{t,sub}=10^{3}$, $\lambda_d^{t,sub,pri}=10^{8}$, $\lambda_d^{t,sub,sec}=3\times10^{9}$.
These weightings indicate that the highest priority is on tracking references from the layer above in the hierarchical framework.
The two controllers in the hydraulic subsystem control layer are updated every 1s, with $N_h^{m,sub}=3$ steps. For all pressures, $\underline{p}_i=10\,\mathrm{kPa}\ \forall i$ and $\overline{p}_i=200\,\mathrm{kPa}\ \forall i$.
The four controllers in the pump control layer are updated every 0.25s with $N_h^{p}=10$ steps. For all pumps, $\underline{u}_i^{p}=20\%\ \forall i$ and $\overline{u}_i^{p}=65\%\ \forall i$.
Figure 17(a) shows the pump commands chosen by the hierarchical controller in the experiment. Figure 17(b) shows the resulting cold plate wall temperatures, which exhibit significantly improved
constraint management as compared to the centralized benchmark controller. This improvement is largely due to the inclusion of the thermal subsystem control layer, which leverages a faster update
rate to compensate for thermal model error and reject much of the unknown noise added to the nominal heat load profile in tracking references from above in the hierarchy.
Controller Comparison Summary.
Figure 18(a) shows the total temperature constraint violations under each controller, computed by integrating the magnitude of constraint violations over time for each cold plate. The integral of
violations in CP 1 is small enough for all controllers to not be visible in this figure. Figure 18(b) shows the peak temperature violation across the mission in each cold plate under each controller.
From Figs. 18(a) and 18(b), it is clear that the hierarchical framework outperforms both of the baseline controllers, having 7.7% of the total violations of the centralized baseline controller, 21%
of the total violations of the decentralized PI baseline controller, and greatly reduced peak violations in CP 2 and CP 3.
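Both comparison metrics can be computed directly from logged temperature traces: the time-integrated magnitude of constraint violations and the peak violation. A sketch (the temperatures, bound, and sample time below are illustrative):

```python
import numpy as np

def violation_metrics(T, T_max, dt):
    """Integral (°C·s) and peak (°C) of upper-bound violations."""
    v = np.maximum(T - T_max, 0.0)       # violation at each sample
    return float(np.sum(v) * dt), float(np.max(v))

T = np.array([44.0, 45.5, 47.0, 44.5])   # logged wall temperatures (°C)
integral, peak = violation_metrics(T, T_max=45.0, dt=2.0)
print(integral, peak)  # 5.0 2.0
```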
Figure 19 shows the total energy consumed by the pumps under each controller, calculated as a function of the electrical current measured for each pump during the experiments. Although the objective
of minimizing mass flow rates is given a relatively small weight in the MPC formulations, the centralized and hierarchical approaches still require less total pump energy than the decentralized PI
approach. Most importantly, Fig. 19 shows that differences in thermal performance between the three control approaches are not due to increasing the overall actuator effort of the system but instead
due to a better allocation of actuation.
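With the pumps' 12 VDC supply, pump energy follows from integrating electrical power P = V·I over the logged current samples. A sketch using trapezoidal integration (the current samples are illustrative):

```python
import numpy as np

def pump_energy(current, dt, voltage=12.0):
    """Total electrical energy (J) from sampled pump current (A),
    integrated with the trapezoidal rule at fixed sample time dt (s)."""
    power = voltage * np.asarray(current)                   # W at each sample
    return float(np.sum((power[1:] + power[:-1]) / 2) * dt)  # J

energy = pump_energy([1.0, 1.2, 1.1, 0.9], dt=0.1)
```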
Conclusion
This paper applies a graph-based modeling approach and hierarchical control framework for thermal management. Graph-based dynamic models are derived from conservation of mass and thermal energy,
where vertices represent storage elements and edges capture the transport of mass and energy. Experimental validation with a testbed fluid-thermal system demonstrates the high accuracy of the
modeling approach. The graph-based framework facilitates model-based control design and is especially well suited to hierarchical control designs, where the control structure should reflect the
structure inherent in the graph framework. A scalable hierarchical framework is proposed to manage the multidomain and multi-timescale dynamics present in the system. The proposed hierarchical
controller is experimentally demonstrated on a testbed system and compared to decentralized and centralized benchmark controllers, where it is found to perform significantly better in managing
thermal objectives by compensating for model error and rejecting unknown disturbances.
In cases where a centralized controller can be solved using a complete system model at a fast update rate and a long time horizon, implementation of hierarchical control is likely not warranted.
However, beyond a certain level of complexity executing such a centralized controller becomes intractable given the limited computational resources on board vehicle systems. Therefore, hierarchical
control represents an enabling technology for achieving high performance in highly complex thermal management systems, where centralized control is not sufficiently scalable and decentralized control
can result in poor performance or necessitate overconservative designs due to an inability to manage coupling between subsystems.
Future work will extend the model-based control design to better capture nonlinear regimes in the system dynamics by performing a local linearization at each controller update and/or representing the
system as a set of switched linear models with modes specific to operating regions. Formal procedures will also be explored for choosing the number of layers of the hierarchy, the partitioning of
dynamics within each layer, and the update rate of each layer. Future work will also include control of other energy domains present in vehicle energy systems, using the graph-based models for
electrical and turbomachinery components presented in Ref. [32]. Lastly, analytical techniques for ensuring stability will be incorporated into the hierarchical control demonstration, such as the
passivity approach for graph-based models in Ref. [33], which has been extended to switched graph-based models in Ref. [34].
Funding Data
• National Science Foundation Graduate Research Fellowship (Grant No. DGE-1144245).
• National Science Foundation Engineering Research Center for Power Optimization of Electro-Thermal Systems (POETS) with cooperative agreement EEC-1449548.
• Air Force Research Laboratory (Contract No. FA8650-14-C-2517).
Appendix A: Experimental Testbed Overview
This experimental testbed was developed to emulate features of fluid-based thermal management systems while being rapidly reconfigurable to allow for numerous configurations. Table 4 and Fig. 20
contain specifications and images of the components and sensors currently included in the testbed. The working fluid is an equal parts mixture of propylene glycol and water. Components are connected
via flexible tubing.
Table 4
Component Specifications Number supported
(a) Pump Swiftech MCP35X 8
12 VDC, 1.5 A max, PWM-controlled
4.4 m max head
17.5LPM max flow
SparkFun ACS712 low current sensor
(b) Brazed-plate HX Koolance HXP-193 4
12 plates
4.0kW at 5LPM and 20°C inlet temperature difference
(c) Cold plate HX Wakefield-Vette 6-pass, 6″ exposed cold plate 4
Vishay LPS1100H47R0JB thick film resistors, 47 $Ω$, 1100 W max power each
Crydom 10PCV2415 solid-state relay
Echun Electronic Co. ECS1030-L72 noninvasive current sensor
(d) Pipe Koolance HOS-13CL —
Clear PVC
(e) Reservoir Koolance 80×240mm 4
8″eTape liquid-level sensor
(f) Chiller Polyscience 6000 Series 1
Up to 2900 W at 20°C
−10°C to +70°C
(g) Temp. sensor Koolance SEN-AP008B (fluid) 16
Koolance SEN-AP007P (surface)
10 KΩ thermistor
(h) Pressure sensor Measurement Specialties US300 7
Up to 310kPa gauge
(i) Flow rate sensor Aqua computer high flow 8
Centrifugal pumps are the primary fluid movers in the system. Speed is controlled via a PWM % duty cycle with less than 20% corresponding to a constant 1300rpm, 65% and above corresponding to
4500rpm, and a linear trend between. Peak power consumption of the pumps is 20 W with a peak efficiency of 35%.
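The PWM-to-speed relationship described above translates directly into a function:

```python
def pump_speed_rpm(pwm_percent):
    """Pump speed from PWM duty cycle: constant 1300 rpm below 20%,
    constant 4500 rpm at 65% and above, and a linear trend between."""
    if pwm_percent < 20.0:
        return 1300.0
    if pwm_percent >= 65.0:
        return 4500.0
    return 1300.0 + (4500.0 - 1300.0) * (pwm_percent - 20.0) / (65.0 - 20.0)
```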
Liquid-to-liquid brazed-plate HXs transfer thermal energy between fluid loops in either a parallel-flow or counter-flow configuration.
Each CP heat exchanger consists of several 47 $Ω$ resistive heaters mounted to an aluminum cold plate that has copper tubing passing through. The heaters on each cold plate are wired to a solid-state
relay actuating the heater power output. Up to four heaters can be mounted on each cold plate; however, in this paper just two are used, allowing a maximum heat load of 1.7kW to be applied to each
cold plate.
The reservoirs act as thermal storage elements. A liquid-level sensor inside each reservoir is used to calculate its liquid mass and therefore its thermal capacitance.
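This is a direct application of C = m·c_p, with the mass obtained from the measured level and the tank cross section. In the sketch below, the diameter matches the 80 mm reservoirs, while the density and specific heat are rough, assumed values for the propylene glycol–water mixture:

```python
import math

def reservoir_capacitance(level_m, diameter_m=0.08, rho=1040.0, cp=3600.0):
    """Thermal capacitance (J/K) of the reservoir liquid from its level.

    rho (kg/m^3) and cp (J/(kg K)) are rough, assumed values for an equal
    parts propylene glycol / water mixture; diameter matches the 80 mm tank.
    """
    area = math.pi * (diameter_m / 2.0) ** 2   # tank cross section (m^2)
    mass = rho * area * level_m                # liquid mass (kg)
    return mass * cp

C = reservoir_capacitance(0.20)  # 0.20 m of liquid
```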
A 1.5 HP (1.12 kW) industrial chiller acts as a thermal energy sink (e.g., a vapor compression system). With variable temperature control from –10°C to 70°C, the chiller can emulate a wide range of
sink conditions.
Infrared cameras were used to identify locations on the HX and CP walls that closely represent the average wall temperature, at which surface temperature sensors are affixed. The infrared image in
Fig. 21 shows CP 1 and reservoir 1 of the example testbed configuration in Fig. 4. The cable for the CP 1 wall temperature sensor leads from the center of the plate across its left side.
Sensors and actuators are connected to a National Instruments CompactDAQ, exchanging sensor measurements and actuator commands with National Instruments LabVIEW software on a desktop computer at a
rate of 10Hz. From within LabVIEW, signals can be exchanged with MATLAB/Simulink either by running the two programs simultaneously and communicating via the user datagram protocol or by embedding MATLAB code in LabVIEW using a MATLAB script node.
Appendix B: Hydraulic Coupling Constraints
To determine the constraints on mass flow rate for closed-loop control, the nonlinear hydraulic model is simulated to steady-state at all combinations of pump speeds in the range of 20–65% PWM in
increments of 0.25%. As a safety margin against complete flow reversal, any input combinations resulting in a mass flow rate less than 0.03kg/s are excluded from the allowable hydraulic operating
conditions. The resulting envelope of mass flow rates through pumps 3 and 4 is shown in Fig. 22(a).
Figure 22(b) shows the envelope of pump commands generating mass flow rates in the envelope of Fig. 22(a). This demonstrates that coupling between the two pump commands must be taken into account to
avoid reverse flow.
The envelope in Fig. 22(a) is assumed to be a polyhedron, defined by the linear inequality $E^{pri}=\{\dot{m}^{pri} \mid H^{pri}\dot{m}^{pri}\le z^{pri}\}$, where $H^{pri}$ is a matrix of appropriate dimensions and $z^{pri}$ is a vector. Vertices used to define this polyhedron are circled in Fig. 22(a). The envelope of achievable mass flow rates in the secondary loop is defined similarly by $E^{sec}=\{\dot{m}^{sec} \mid H^{sec}\dot{m}^{sec}\le z^{sec}\}$.
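Checking whether a candidate pair of mass flow rates lies in such an envelope is just an evaluation of the linear inequality. The H and z below describe an illustrative box-like polyhedron with one coupling cut, not the one obtained from the circled vertices in Fig. 22(a):

```python
import numpy as np

# Illustrative polyhedron: 0.03 <= m1 <= 0.4, 0.03 <= m2 <= 0.4,
# plus a coupling cut m1 - m2 <= 0.25 standing in for the real envelope.
H = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0], [1.0, -1.0]])
z = np.array([-0.03, 0.4, -0.03, 0.4, 0.25])

def in_envelope(m_dot):
    """True if the mass flow rate pair satisfies H m_dot <= z."""
    return bool(np.all(H @ np.asarray(m_dot) <= z + 1e-12))

print(in_envelope([0.2, 0.1]), in_envelope([0.35, 0.05]))
```

Constraints of this form can be imposed directly in the hydraulic subsystem controllers as linear inequalities on the decision variables.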
References
[2] R. W., "Efficient Propulsion, Power, and Thermal Management Integration," Paper No. 2013-3681.
[3] "A Hierarchical Control Strategy for Aircraft Thermal Systems," M.S. thesis, University of Illinois at Urbana-Champaign, Champaign, IL.
[4] "Thermal Analysis of an Integrated Aircraft Model," Paper No. 2010-288.
[5] T. S., "Dynamic Thermal Management for Aerospace Technology: Review and Outlook," J. Thermophys. Heat Transfer.
[6] K. L., J. D., and D. L., "Steady-Periodic Acceleration Effects on the Performance of a Loop Heat Pipe," J. Thermophys. Heat Transfer.
[7] S. K., and K. L., "Quasi-Steady-State Performance of a Heat Pipe Subjected to Transient Acceleration Loadings," J. Thermophys. Heat Transfer.
[8] "Robust Optimization of an Aircraft Power Thermal Management System," Paper No. 2010-7086.
[9] Doman, D. B., "Fuel Flow Control for Extending Aircraft Thermal Endurance—Part I: Underlying Principles," Paper No. AIAA 2016-1621.
[10] Doman, D. B., "Rapid Mission Planning for Aircraft Thermal Management," Paper No. AIAA 2015-1076.
[11] Doman, D. B., "Fuel Flow Control for Extending Aircraft Thermal Endurance—Part II: Closed Loop Control of Dual Tank Systems," Paper No. AIAA 2016-1622.
[12] T. O., J. E., A. G., and T. S., "A Model Predictive Framework for Thermal Management of Aircraft," Paper No. DSCC2015-9771.
[13] J. E., T. O., A. G., and T. S., "Hardware-in-the-Loop Validation of Advanced Fuel Thermal Management Control," 46th AIAA Thermophysics Conference, Washington, DC, June 13–17.
[14] "Graph-Based Hierarchical Control of Thermal-Fluid Power Flow Systems," American Control Conference (ACC), Seattle, WA, May 24–26.
[15] K. L., and T. L., "Dynamic Consensus Networks With Application to the Analysis of Building Thermal Processes," IFAC Proc. (1).
[16] H. A., "A Graph-Theory-Based Approach to the Analysis of Large-Scale Plants," Comput. Chem. Eng.
[17] Koeln, J. P., Williams, M. A., and Alleyne, A. G., "Hierarchical Control of Multi-Domain Power Flow in Mobile Systems—Part I: Framework Development and Demonstration," Paper No. DSCC2015-9908.
[18] T. E., "Refrigerant Charge Management and Control for Next-Generation Aircraft Vapor Compression Systems," Paper No. 2013-01-2241.
[19] T. O., "Optimal Energy Use in Mobile Applications With Storage," Ph.D. dissertation, University of Illinois at Urbana-Champaign, Champaign, IL.
[20] S. K., and D. A., "A Control System Test Bed for Demonstration of Distributed Computational Intelligence Applied to Reconfiguring Heterogeneous Systems," IEEE Instrum. Meas. Mag. (1), pp. 30–37.
[21] "Hierarchical Model Predictive Control," 46th IEEE Conference on Decision and Control, New Orleans, LA, Dec. 12–14.
[22] Scattolini, R., "Architectures for Distributed and Hierarchical Model Predictive Control—A Review," J. Process Control.
[23] R. C., J. E. R., De Queiroz, M. H., and D. M., "Multi-Level Hierarchical Interface-Based Supervisory Control."
[24] "Hierarchical Model Predictive Control for Plug-and-Play Resource Distribution," Distributed Decision Making and Control.
[25] J. D., "Robust Aggregator Design for Industrial Thermal Energy Storages in Smart Grid," IEEE Trans. Smart Grid.
[26] Koeln, J. P., Williams, M. A., Pangborn, H. C., and Alleyne, A. G., "Experimental Validation of Graph-Based Modeling for Thermal Fluid Power Flow Systems," Paper No. DSCC2016-9782.
[27] West, D. B., Introduction to Graph Theory, 2nd ed.
[28] R. A., and S. M., "Vehicle Level Tip-to-Tail Modeling of an Aircraft," Int. J. Thermodyn.
[29] Löfberg, J., "YALMIP: A Toolbox for Modeling and Optimization in MATLAB," International Symposium on Computer Aided Control Systems Design, New Orleans, LA, Sept. 2–4.
[30] Gurobi Optimization, Gurobi Optimizer Reference Manual, Gurobi Optimization, Houston, TX.
[31] Wächter, A., and Biegler, L. T., "On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming," Math. Programming.
[32] Williams, M. A., Koeln, J. P., Pangborn, H. C., and Alleyne, A. G., "Dynamical Graph Models of Aircraft Electrical, Thermal, and Turbomachinery Components," ASME J. Dyn. Syst., Meas., Control, p. 041013.
[33] Koeln, J. P., and Alleyne, A. G., "Stability of Decentralized Model Predictive Control of Graph-Based Power Flow Systems Via Passivity."
[34] Pangborn, H. C., Koeln, J. P., and Alleyne, A. G., "Passivity and Decentralized MPC of Switched Graph-Based Power Flow Systems," American Control Conference, Milwaukee, WI, June 27–29 (in press).
Subtracting One-Digit Numbers Using a Related Addition Sentence to Solve Word Problems
Chloe and Daniel made 6 cupcakes. After their dog ate some, there were 2 left. They want to know how many cupcakes their dog ate. Who is correct? Chloe says: We can solve 2 + _ = 6 Daniel says: We
can solve 6 − 2 = _
Video Transcript
Chloe and Daniel made six cupcakes. After their dog ate some, there were two left. They want to know how many cupcakes their dog ate. Who is correct? Chloe says: We can solve two plus what equals
six. Daniel says: We can solve six take away two equals what.
We know that Chloe and Daniel had six cupcakes to begin with. We can model this using six counters in a 10 frame. We also know that after their dog ate some, there were two cupcakes left, so we know
that one of the parts in our part–whole model is the number two. They had six cupcakes. Their dog ate some. And they have two left. Chloe says we can solve two plus what equals six. If we know that
two plus four makes six, Chloe must be correct. And if we look at our 10 frame, we can see that there are two counters and four counters have been crossed out, but there are six counters altogether.
Two and four are a pair of numbers which go together to make six. So Chloe is correct.
Daniel said we could solve six take away two. He’s also correct. If two plus four equals six and six take away two equals four, addition and subtraction are related. So if Chloe and Daniel made six
cupcakes, their dog ate some, and they had two left, they can work out how many cupcakes their dog ate by either solving two plus four equals six or six take away two equals four. They’re both
|
{"url":"https://www.nagwa.com/en/videos/749102457020/","timestamp":"2024-11-06T10:48:39Z","content_type":"text/html","content_length":"242469","record_id":"<urn:uuid:00403d4e-9b71-4edd-8088-3ed8039261c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00710.warc.gz"}
|
want to compute zero mean for whole matrix and then some scaling to...
I have a 1024*966 matrix. For this matrix, I want to compute a zero-mean version and then apply some scaling, hoping to improve my results by doing this. How can I? The final result should still be a matrix of size 1024*966.
For the zero mean I got:
For the scaling I don't know what to do.
1 Comment
I'm sorry, but I don't think the explanation is what you want is very clear. Can you give a very small example (maybe 3x3?) that shows what you want to do?
Answers (1)
Edited: Image Analyst on 29 Aug 2013
You should have edited your original question, or added a comment to my answer there.
Why do you call it zero mean? That's not standard terminology. Why are you taking only the left column of 2D image AC1? That doesn't do the whole matrix, just the left column. Anyway, you can do
Y = double(AC1) - mean2(AC1); % Use mean(AC1(:)) if you don't have the IPT.
For scaling just multiply by some scaling factor, like 42 or whatever - I'm not sure why that part is confusing to you.
Image Analyst on 18 Sep 2013
I still don't understand. Did you switch terminology on us? Is your new x the old z (the plane number, or 1,2,3) and the new y is the pixel value at that (x,y) location in the three matrices?
Since you've lost me, why don't you just use polyfit() to get your parabola?
coefficients = polyfit(x, y, 2);
Unhappy Research
If you've been surfing the web lately, chances are you've come across some version of this news story about a research study that shows that New York is the unhappiest state in the country while Louisiana is the happiest. A finding that is, prima facie, ridiculous.
Before you start moving your family from Manhattan to New Orleans it's worth considering what's wrong with the story - which strikes me as being the perfect combination of dubious research wedded to
journalistic misinterpretation.
As I understand it, Oswald and Wu basically construct a subjective measure of happiness by state by taking survey results on people's stated level of satisfaction and running a regression predicting
these satisfaction levels as a function of a range of individual level attributes (such as income, education, employment category, etc.) plus dummies for each state (except Alabama - the omitted
category). And therein lies the misinterpretation: the subjective coefficients they report are not telling us how happy people in each state are, they are telling us what the net effect of the state
is after all other individual level factors are controlled for. In other words, the negative coefficient of New York means that a person with exactly the same income, education , employment, etc.
would be less satisfied in New York than in Alabama.
Now, this would make sense if individual attributes that contributed to happiness were uncorrelated with state of residence, but this is clearly not the case. If states differ substantially in the
average levels of happiness-causing attributes (i.e. if people in New York are likely to have higher levels of education, higher income, etc.) then the coefficients for the state dummies by
themselves are not meaningful; in particular, we are likely to see a negative bias in the coefficients of states with high levels of positive attributes. What's more, this bias is going to be
considerably amplified if the dependent variable of happiness / satisfaction is right-censored, that is to say if the measure of satisfaction used does not adequately capture differences in
satisfaction levels at the higher end of the range (which, btw, is the case with the data used in the study - on a 1 to 4 scale the average score is 3.4).
To see this in (exceedingly) simple terms, imagine that we have only two people from two states - Louisiana (L) and New York (N); that we have only one other explanatory variable - Income (I); and
that the satisfaction score for both people, on a 1 to 4 scale, is 4, i.e. they both claim to be 'Very Satisfied'. The regression would then try to solve
4 = B1·Il + Bl
4 = B1·In + Bn
where B1 is the coefficient for Income, Bl and Bn are the satisfaction coefficients for the states, and Il and In are the income levels of the person in Louisiana and the person in New York. Now,
imagine that the person in New York has twice the income of the person in Louisiana. We then have
4 = B1·Il + Bl = B1·In + Bn = B1·2Il + Bn
Now, if B1·Il + Bl = B1·2Il + Bn, and assuming B1>0 (more income means greater happiness), this would mean that Bl>Bn, i.e. the satisfaction coefficient of Louisiana is greater than the satisfaction
coefficient of New York. Notice that this doesn't really mean anything about living in New York, it's simply an artifact of the fact that satisfaction measures top out at 4 and that New York has
twice the income levels of Louisiana.
On the whole then, it's unclear that the coefficients of the state dummies actually mean anything. But even in the best case, all they mean is that moving from New York to Louisiana will increase
your satisfaction, provided you can find the identical job and continue to make the same amount of money. Good luck with that.
Finally, let's think for a moment about the researcher's claim that their study shows a surprisingly strong correlation between subjective and objective measures of satisfaction. Again, let's think
about what the subjective state coefficient really is. It's the average difference between the satisfaction of a person with a certain level of income (uncorrected for cost of living), education,
etc. living in the focal state (New York) vs. a person with the same level of income, education, etc. living in Alabama. Now what might cause a person making the same dollar amount to be less
satisfied in New York than in Alabama? Obviously, cost of living. And what is a major component of the 'objective' measure the study uses to rank states? Why, it's cost of living. Is it really
surprising then that the two measures turn out to be highly correlated? I don't think so.
What would be interesting, of course, would be to see a version of the study that a) controlled for the location choices of individuals through some kind of simultaneous equation model and b)
included income levels adjusted for cost of living in the regression equation to predict satisfaction levels. Then we might actually learn something.
Ironically, this is one instance where a naive application of the satisfaction scores - a simple table of the mean satisfaction scores by state - may actually be more accurate and representative than
the subjective coefficients calculated by the authors. I'm not sure how the mean satisfaction score for New York compares to the mean satisfaction score for Louisiana, but I'd be amazed if New York
scored lower than Louisiana, let alone if New York was the lowest of all states. Now that would be surprising.
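As a postscript, the censoring mechanism described above is easy to reproduce numerically. The simulation below uses made-up numbers (the incomes, noise levels, and seed are all hypothetical): satisfaction depends only on income, New York-like incomes are higher, reported scores cap at 4, and the state dummy comes out negative anyway.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# State 0 = "L" (lower, widely spread incomes), state 1 = "N" (high incomes).
state = rng.integers(0, 2, n)
income = np.where(state == 1,
                  2.0 + 0.05 * rng.standard_normal(n),
                  1.0 + 0.30 * rng.standard_normal(n))

# True satisfaction depends ONLY on income -- no state effect at all.
true_sat = 2.0 + 1.2 * income + 0.2 * rng.standard_normal(n)
reported = np.minimum(true_sat, 4.0)   # the survey scale tops out at 4

# OLS of reported satisfaction on [1, income, dummy for state N].
X = np.column_stack([np.ones(n), income, state.astype(float)])
beta, *_ = np.linalg.lstsq(X, reported, rcond=None)
print(beta[2] < 0)  # True: N's dummy is negative purely from censoring
```

Because nearly every "N" respondent sits above the cap, the dummy absorbs the satisfaction the scale cannot record, exactly the negative bias argued above.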
2 comments:
Ana said...
I will need to read the actual study in order to comment properly on the results, but from its presentation and abstract it seems to me that it entails that cost of living is not the only factor
to happiness. For example: 9 out of the top 10 states are in the South – is it a mere coincidence or a result of the correlation between exposure to sun and levels of depression?
The only issue I have: they consider it an objective measurement of happiness, like in 100% objective, but if happiness is conditioned by subjective factors such as our ability to deal with
stress ….?
Falstaff said...
Ana: It's true that other factors go into the objective measure, climate being one of them. But in a sense, they all have the same problem as cost of living. The whole point of compensating
differentials is that people are paid more to live in places with higher cost of living, less pleasant climate, etc. If you take the average error for a location from a regression on uncorrected
income (which is all the subjective coefficient is) and correlate it with the factors that cause incomes to be higher in that location, of course you're going to find a high correlation. All that
proves is that the objective measures are valid, in that they do really explain income differences.
Classical Least Squares (CLS), Part 3: Expanding the Analysis to Include Concentration Information on Principal Component Regression (PCR) and Partial Least Squares (PLS) Algorithms
From the spectra alone, we have been able to deduce some interesting things about the behavior of data. We can now extend our data analysis to apply algorithms that include the constituent
information as well. This article examines the behavior of data when subjected to the popular principal component regression (PCR) and partial least square (PLS) algorithms. PCR is essentially the
same as principal component analysis (PCA), but it takes the results a little further by relating them to known information about the composition or other characteristics of the material being analyzed.
This is our first exploration into evaluating the effect of using an advanced calibration algorithm on calibration performance. We also look at the effects of different algorithms, different numbers
of factors for both principal component regression (PCR) and partial least squares (PLS) algorithms, and different ways to look at the results. Although we do not eschew use of the simple numerical
measures commonly used to evaluate calibration performance, we consider the way the common diagnostic statistics change under the different conditions to be of much greater importance. Therefore, the
bulk of our article here consists of graphical displays that provide much more information than the raw numbers alone.
Figures 1, 2, 3, and 4 show the results of applying the PCR and PLS calibration algorithms to the same spectral data. For each algorithm, volume fractions and weight fractions served as the
constituent values. The results are summarized as the residual variance. Table I presents the organization of these four figures.
As we reported in our previous column on this topic (1), the use of five chemical components to construct the sample set means that theoretically there should be only four degrees of freedom in the
data set. Despite that, we discovered that some of the analytes required the use of up to six PCR or PLS factors. However, artifacts revealed in the results already presented (such as the patterns of
residuals seen in the data plots) indicate that there is more to the story. To ensure that we did not miss any important effects, we continued the calculations for both PCR and PLS analysis to
include all numbers of factors up to ten factors, for each calibration, and we report all of those results.
Effect on PCR Analysis
Figure 1 shows the effect on the residual variance of volume fractions. The residual variance is the basic measure of the error of the constituent value, from which statistics such as the standard
error of estimate (SEE) and standard error of calibration (SEC) are calculated. Principal component analysis (PCA) is used to generate the model and the constituent is presented as volume fraction.
Figure 1 shows the residual variance for each ingredient at the different stages, where they have various numbers of principal components included in the calibration.
Figure 1: Residual variance from PCR calibrations for varying numbers of principal components included in the calibration calculations and using volume fractions as the constituent concentrations.
Important features of the behavior of the data are marked. See text for full explanation.
We note in Figure 1 that changes in calibration performance depend on the number of principal components used. Some of the ingredients (such as methanol and methylene dichloride) show large
improvements in the calibration performance for early principal components (PCs), whereas for others (such as butanol and dichloropropane) no appreciable improvement in calibration performance was
noted until three or more PCs have been included in the calibration model. This observation is consistent with the behavior we observed in the previous installment, where the variance of the spectra
also required more than the theoretical number of PCs (four). We also note in Figure 1 that for these latter two ingredients, at least one early PC makes no contribution to improvement of the
performance of the calibration model, as marked in the figure.
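The qualitative behavior described here — residual variance falling as principal components are added to the calibration model — can be sketched with a toy principal component regression. Everything below (the pure-component spectra, the mixture design, all function names) is invented for illustration; the article's actual data set and software are not reproduced.

```python
import random

random.seed(0)

# Two hypothetical "pure component" spectra over six wavelengths.
S1 = [1.0, 0.8, 0.2, 0.0, 0.1, 0.3]
S2 = [0.0, 0.2, 0.9, 1.0, 0.4, 0.1]

# Simulated mixture spectra: each row is c1*S1 + c2*S2 plus a little noise.
n, p = 40, 6
c1 = [random.random() for _ in range(n)]
c2 = [random.random() for _ in range(n)]
X = [[c1[i] * S1[j] + c2[i] * S2[j] + random.gauss(0, 0.01)
      for j in range(p)] for i in range(n)]

def center(M):
    means = [sum(col) / len(col) for col in zip(*M)]
    return [[row[j] - means[j] for j in range(len(means))] for row in M]

def scores_of(M, v):
    # t = M v (projection of each spectrum onto loading v)
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def first_pc(M, iters=200):
    # Power iteration on M'M gives the leading principal-component loading.
    v = [1.0] * len(M[0])
    for _ in range(iters):
        t = scores_of(M, v)
        w = [sum(M[i][j] * t[i] for i in range(len(M))) for j in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def residual_variance(y, score_vectors):
    # Sequential OLS on the (mutually orthogonal) score vectors; the data are
    # centered, so no intercept is needed.
    resid = y[:]
    for t in score_vectors:
        b = sum(r * ti for r, ti in zip(resid, t)) / sum(ti * ti for ti in t)
        resid = [r - b * ti for r, ti in zip(resid, t)]
    return sum(r * r for r in resid) / len(resid)

Xc = center(X)
ybar = sum(c1) / n
y = [c - ybar for c in c1]          # calibrate for constituent 1

all_scores, rv = [], []
for _ in range(2):
    v = first_pc(Xc)
    t = scores_of(Xc, v)
    all_scores.append(t)
    rv.append(residual_variance(y, all_scores))
    # Deflate: subtract this component's contribution before extracting the next.
    tt = sum(x * x for x in t)
    loadings = [sum(t[i] * Xc[i][j] for i in range(n)) / tt for j in range(p)]
    Xc = [[Xc[i][j] - t[i] * loadings[j] for j in range(p)] for i in range(n)]

print(rv)  # residual variance falls as PCs are added to the model
```

In this two-component toy, the second PC already drives the residual variance close to the noise level; the article's point is that with real mixtures, which PCs do the "heavy lifting" varies by constituent.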
The calibration performance of these various ingredients is complicated to describe. Acetone and dichloropropane are two ingredients that required more than four PCs to produce a calibration where
the residual error approached the noise level. Those two ingredients are also the ones for which one or more early PCs make very little contribution to their calibration performance. Dichloropropane
showed some improvement when the second PC was included in the calibration model, but the third PC did little to improve it further. For methylene dichloride, neither of the first two PCs appreciably
improved the calibration performance. For both of those ingredients it required more than four PCs to improve the performance values to be comparable to the other ingredients. It thus appears that
each ingredient individually requires four PCs to achieve an optimum calibration. However, depending on the ingredient, these four PCs are not necessarily the first four PCs that are computed from
the spectral variations. This finding explains the previous observation (1) that six PCs were required to account for the total spectral variability in the data. It is here that we see the underlying
cause of that behavior: Some of the PCs do not explain the variance of some of the mixture constituents. Therefore, it requires more PCs than expected to account for all the variance caused by all
the ingredients, even if only a few of the ingredients require the higher principal components.
Both butanol and methylene dichloride had virtually all of their respective variances accounted for by the point where four PCs have been included in the calibration model. As we see, both of these
ingredients are indistinguishably close to the x‑axis by the point where four PCs are included in the model. Moreover, while dichloropropane required the full six PCs to reach zero residual variance,
acetone only required five despite the fact that the first two PCs made little contribution to the calibration performance for this ingredient; all the “heavy lifting” for this constituent was done
by PCs 4 and 5. For butanol, the major contribution to calibration performance is attributed to PCs 3 and 4.
Now let’s compare the results from calibrating for weight fractions to what happens when the constituent concentrations are expressed as volume fractions. Figure 2 presents the application of the PCR
algorithm on the dataset we have been exploring, using the weight fraction for the “concentration.” This is exactly analogous to our previous discussion except using weight fractions instead of
volume fractions.
Figure 2: Residual variance from PCR calibrations for weight percent. This figure shows varying numbers of principal components included in the calibration calculations and uses weight fractions as
the constituent concentrations. Analytes that require more than four PCs are the ones where the lower PCs had no effect. In all cases, four PCs explain the variance in the spectra, but not
necessarily the first four PCs.
Comparing the effects on performance in Figure 1 with the corresponding ones in Figure 2, we see that the same effects are operative. In Figure 1, however, they affect the
performance-versus-number-of-PCs relationship in a clearer way than in Figure 2. On the other hand, there is the same tendency for only two of the PCs to account for most of the reduction in error, for
acetone, methanol and butanol, as there was when the analyte concentration was expressed as volume fraction in Figure 1.
We noted above that for acetone and butanol the value of the residual variance became indistinguishable from the x‑axis when the first four PCs are included in the model. When weight fractions are
used for the constituent concentrations, however, there is still some variance not accounted for at that point, as evidenced by the fact that for both constituents, there is a noticeable space
between the graph point corresponding to four PCs and the x‑axis, when the units of the constituent are weight fractions.
This is a fine point, but it raises an important question: Why is the residual variance for four PCs smaller when volume fractions are used? We attribute that behavior to the fact that the first four
PCs are in fact enough to account for the real spectral variability of the data when volume fractions are used, and this is consistent with our previous findings. When weight fractions are used to
describe constituent concentrations, however, there is additional variance due to the nonlinear relationship between this component “concentration” value and the spectral data. This excess variance
is not present when volume fractions are used; therefore, there is no excess variance to show up in the volume–fraction plot.
The same PCs are computed in both cases (because the computation of PCs do not involve the constituent values), and because PCs are orthogonal, those that improve the calibration performance will
make the same contribution to that improvement regardless of the presence or absence of other PCs. Thus, equivalent calibration performance for any given ingredient should be obtainable by including
only those PCs in the calibration model that contribute to the improvement of the performance for that ingredient, answering the question posed above (“why is the residual variance for four PCs
smaller when volume fractions are used?”). The answer to that question is based on the same considerations we concluded previously: The excess PCs are attempting to compensate for the nonlinear
behavior of the relationship between the weight fraction and the spectroscopic measurement. Thus we see that it is not necessary, and indeed it is likely to be counterproductive, to include excess
numbers of PCs in a calibration model (conventionally called “overfitting”) solely because the software being used requires that all PCs from the first to the nth be included in the model. If nothing
else, it will add unnecessary noise to the results of the computation.
Effect on PLS Analysis
Figures 3 and 4 present graphs corresponding to Figures 1 and 2 but show the results of performing PLS analysis on the data instead of PCR analysis. Not surprisingly, the graphs are very different
from the ones in Figures 1 and 2. This results from the nature of the two algorithms and the differences between them. The extraction of the spectral variance by PCR is performed independently of the
constituent values, and each PCR loading indicates only the actual changes that result from the spectra of the constituents and their respective concentrations. As we noted above, the same PCR
loading is therefore calculated regardless of the constituent under consideration for the calibration. In the case of PLS, however, the calculation of each loading includes a contribution from the
constituent for which the calibration is being performed, and each loading differs from the corresponding loadings for the other constituents. The result is that each PLS loading is optimized for the
constituent it was generated to calibrate for, and therefore includes the maximum amount of variance from that constituent. It is nearly impossible, therefore, for any PLS loading to not include some
of the remaining variance in either the constituent or spectral values; therefore, there cannot be any PLS loadings which do not remove variance from the data as, for example, the second and third PC
did for acetone in Figure 1, or the third PC did for dichloropropane, in Figure 1.
The mathematical properties of the PLS analysis thus overwhelm the inherent properties of the spectral data itself, although some small vestiges remain. Note in Figure 3 the shape and rate of descent of
the plotted values of the residual variance of acetone, and compares that with the corresponding plotted values of dichloropropane in Figure 4. In both cases, the specified values of residual
variance are the largest in their respective plots; furthermore, the shapes of these two plots are very similar. Similar effects can also be seen for all the other constituents as well.
Figure 3: Residual variance from PLS calibrations for varying numbers of principal components included in the calibration calculations, and using volume fractions for the constituent concentrations.
Figure 4: Residual variance from PLS calibrations, for varying numbers of principal components included in the calibration calculations, and using weight fractions for the constituent concentrations.
Although we cannot include all the graphs we created in this column, we would be remiss if we did not include at least a minimum number of examples of the utility of examining residual plots. Figure
5 presents two residual plots for dichloropropane. These plots are both results obtained from performing PCR analysis of the data, and plotting weight fractions and volume fractions, respectively,
against the PCR predicted values. The curvature of the weight fraction plot is observable while the corresponding plots using volume fractions can be seen to be free from any noticeable nonlinearity.
Figure 5: PCR predictions for dichloropropane using five principal components. (a) Residuals for weight fractions versus predicted values and (b) residuals for volume fractions versus predicted values.
Direct plotting of the weight fraction and volume fraction plots against the PCR predicted values nearly erases the distinctions between the different cases; to the unaided eye, they all look like
straight lines. It is possible to aid the eye by overlaying an actual straight line onto the graph. Then the difference between the weight fraction and volume fraction graphs can be observed, as can
be seen from the plots on the CD, although we cannot include them here.
Our previous result (1) showed that the data do not conform to the theoretical expectations for the number of PCs needed to account for all of the spectral variance. This was a mystery until we saw
the results reported here. Now we see that, because the PCs are computed solely from the spectral data, the use of different units for the constituent values cannot play a direct role in the need for
those “extra” principal components that are computed. But Figures 1 and 2 give us a clue as to the reasons why that happens nonetheless. In the absence of any prior information, we might expect that
each PC would account for a more‑or‑less equal amount of variance from the spectrum of each constituent. Figures 1 and 2, however, show that is not the case. The first PC, for example, accounts for
almost none of the acetone or butanol variance. The second PC accounts for almost none of the acetone or dichloropropane variance. We can see in Figure 1 that in several places, variance resulting
from one or another particular PC is unaffected; several of these places are marked on the figure.
In the end, however, each of the constituents individually requires that four PCs contribute to the accommodation of the variance of that constituent and are therefore needed to remove all the
variance from the data. Given, however, that for some constituents the early PCs do not reduce the variance, more than four total PCs are needed to explain the total variance of the data set. The
conclusion previously ascribed (2) solely to the nonlinear relationship between spectroscopy and “concentration” expressed in incorrect units is now additionally seen to have another cause as well:
It is a consequence of the direct relationship between the physical property (density) and mathematical properties (order of calculating PCs) of the spectroscopic data measured on samples affected by
that physical property, and thus is only part of the explanation for the need for “extra” factors.
This finding is probably the most unexpected and surprising outcome from the analysis of this dataset. The finding that using the correct units for expressing concentration improves the analytical
results and also makes the models more robust and reproducible is not nearly as dramatic, nor as much fun to use, as the application of increasingly complicated and sophisticated multisyllable
chemometric methods. Nevertheless, because it rests on a purely physical and chemical basis, it can improve the accuracy of, as well as our confidence in, the results we obtain. Furthermore,
contrary to our statement at the beginning of this subseries (1) about the state of near–infrared (NIR) spectroscopy analysis, we have now connected this field of endeavor to the rest of the universe
of physical and chemical phenomena. There is still much to learn about the details, but now we can go in the right direction without being misled by the chemometric algorithms whose inner workings
are impossible to understand.
Other Conclusions
Our study also yielded several other important conclusions that are worth highlighting. For example, even though the number of PCs needed to explain the variance introduced by the mixtures in the
data set equals the theoretically expected value, these are not necessarily the first n factors.
We also concluded the following for several chemical compounds: For the methanol results, calibrations for weight% showed a curvature that was masked by random variations
at a low number of factors, visible at a medium number of factors, and reduced at a high number of factors; it was not observed in calibrations for volume%.
Meanwhile, for the dichloropropane results, we observed that the weight% exhibits a curvature at a low and high number of factors, and that the volume fraction shows no curvature at a low number of
factors while reversing the curvature at a high number of factors, which “overcorrects” the non-linearity.
The PCR and PLS calibrations need the number (n) of factors expected from mathematical considerations. A possible explanation for the results is that the calibration algorithms reduce the random
error contribution to the SEC but leave the systematic error (such as the nonlinearity) relatively unaffected.
There were three regimes for operative changes to performance statistics: 1) Calibration corrects for actual changes in sample composition, 2) calibration accommodates itself to random noise, and 3)
calibration accommodates itself to the nonlinearity of data.
Whether effect 2 or 3 dominates depends on the data, and can change during the course of the calibration.
In addition to the above, we present here an expanded version of Table II from (1) listing the known sources of differences between the reference lab and predicted values.
1. H. Mark, J. Workman, Spectroscopy 34(6), 16–24 (2019).
2. H. Mark, R. Rubinovitz, D. Heaps, P. Gemperline, D. Dahm, and K. Dahm, Appl. Spect. 64(9), 995–1006 (2010).
Jerome Workman, Jr. serves on the Editorial Advisory Board of Spectroscopy and is the Senior Technical Editor for LCGC and Spectroscopy. He is also a Certified Core Adjunct Professor at U.S. National
University in La Jolla, California. He was formerly the Executive Vice President of Research and Engineering for Unity Scientific and Process Sensors Corporation.
Howard Mark serves on the Editorial Advisory Board of Spectroscopy, and runs a consulting service, Mark Electronics, in Suffern, New York. Direct correspondence to: SpectroscopyEdit@mmhgroup.com
FLATTEN Function: Definition, Formula Examples and Usage
Are you a Google Sheets user looking for a way to simplify your data and make it easier to work with? If so, you’ll want to get familiar with the FLATTEN function!
The FLATTEN function is a powerful tool that allows you to transform arrays of data into a single row or column. This can be especially useful if you have data that is nested or has multiple
dimensions, as it allows you to flatten it out and make it easier to work with. Whether you’re a beginner or an advanced user, the FLATTEN function is a great addition to your Google Sheets toolkit!
Definition of FLATTEN Function
The FLATTEN function in Google Sheets is a function that converts an array of data with multiple dimensions into a single row or column. It takes an array as its input and returns a range of cells
that contains the flattened array. This can be useful for simplifying data and making it easier to work with, especially if you have data that is nested or has multiple dimensions. The FLATTEN
function is a powerful tool that can help you transform and organize your data in Google Sheets.
Syntax of FLATTEN Function
The syntax for the FLATTEN function in Google Sheets is as follows:
=FLATTEN(array, [flatten_by_column])
The “array” argument is required and is the range of cells that you want to flatten. The “flatten_by_column” argument is optional and specifies whether the array should be flattened by row (if set to
FALSE) or by column (if set to TRUE). If this argument is not included, the default value is FALSE and the array will be flattened by row.
Here is an example of the FLATTEN function in action:
=FLATTEN(A1:C3)
This formula would flatten the array in the range A1:C3 by row, resulting in a range of cells with nine rows and one column.
=FLATTEN(A1:C3, TRUE)
This formula would flatten the array in the range A1:C3 by column, resulting in a range of cells with one row and nine columns.
Examples of FLATTEN Function
Here are three examples of how you might use the FLATTEN function in Google Sheets:
1. Flatten a two-dimensional array: Suppose you have a two-dimensional array in cells A1:C3 that looks like this:
A1 | B1 | C1
A2 | B2 | C2
A3 | B3 | C3
You can use the FLATTEN function to flatten this array by row, like this:
=FLATTEN(A1:C3)
This would result in a range of cells with nine rows and one column.
2. Flatten a larger array: Suppose you have a larger two-dimensional array in cells A1:C9 that looks like this:
A1 | B1 | C1
A2 | B2 | C2
A3 | B3 | C3
A4 | B4 | C4
A5 | B5 | C5
A6 | B6 | C6
A7 | B7 | C7
A8 | B8 | C8
A9 | B9 | C9
You can use the FLATTEN function to flatten this array by row, like this:
=FLATTEN(A1:C9)
This would result in a range of cells with 27 rows and one column.
3. Flatten an array by column: Suppose you have an array in cells A1:C3 that looks like this:
A1 | B1 | C1
A2 | B2 | C2
A3 | B3 | C3
You can use the FLATTEN function to flatten this array by column, like this:
=FLATTEN(A1:C3, TRUE)
This would result in a range of cells with one row and nine columns, like this:
A1 | A2 | A3 | B1 | B2 | B3 | C1 | C2 | C3
I hope these examples give you a sense of how you can use the FLATTEN function in Google Sheets. Let me know if you have any other questions or would like further clarification.
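For readers working outside of Sheets: the two flattening orders in these examples correspond to row-major and column-major traversal of a grid. A sketch in Python with illustrative values (not tied to any spreadsheet API):

```python
from itertools import chain

grid = [["A1", "B1", "C1"],
        ["A2", "B2", "C2"],
        ["A3", "B3", "C3"]]

# Flatten by row (row-major traversal).
by_row = list(chain.from_iterable(grid))

# Flatten by column: transpose with zip(*grid) first, then chain.
by_col = list(chain.from_iterable(zip(*grid)))

print(by_row)  # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2', 'A3', 'B3', 'C3']
print(by_col)  # ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']
```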
Use Case of FLATTEN Function
Here are a few real-life examples of how you might use the FLATTEN function in Google Sheets:
1. Consolidating data from multiple sheets: Suppose you have a workbook with multiple sheets that contain data that you want to combine into a single sheet. You can use the FLATTEN function to
flatten the data from each sheet into a single row or column, and then use the QUERY function to combine the data into a single sheet.
2. Transforming data for analysis: Suppose you have data that is nested or has multiple dimensions, and you want to transform it into a format that is easier to analyze. The FLATTEN function can
help you do this by converting the data into a single row or column. You can then use other functions, such as SUM, AVERAGE, or MAX, to analyze the data.
3. Cleaning and formatting data: Suppose you have data that is not in a consistent format, and you want to clean and format it before using it in a chart or pivot table. The FLATTEN function can
help you do this by converting the data into a single row or column, which makes it easier to apply formatting and sorting.
I hope these examples give you some ideas for how you might use the FLATTEN function in Google Sheets. Let me know if you have any other questions or would like further clarification.
Limitations of FLATTEN Function
There are a few limitations to the FLATTEN function in Google Sheets that you should be aware of:
1. The FLATTEN function can only be used with arrays that have a maximum size of 50,000 cells. If your array is larger than this, the FLATTEN function will not work.
2. The FLATTEN function does not preserve the original structure of the array. Instead, it converts the array into a single row or column, which means that you may lose some of the context or
relationships between the cells.
3. The FLATTEN function does not work with cell references that are used as inputs. Instead, you must specify the actual range of cells that you want to flatten.
4. The FLATTEN function does not work with arrays that contain formulas or functions. Instead, you must first evaluate the formulas or functions and then use the FLATTEN function on the resulting values.
Commonly Used Functions Along With FLATTEN
Here are some commonly used functions that you might use with the FLATTEN function in Google Sheets:
1. QUERY: The QUERY function allows you to extract data from a range of cells based on specific criteria. You can use the FLATTEN function to convert an array into a single row or column, and then
use the QUERY function to extract the data that you need. For example:
=QUERY(FLATTEN(A1:C3), "SELECT * WHERE Col1 = 'apple'")
This formula would flatten the array in cells A1:C3 and then use the QUERY function to select only the rows where the first column contains the value “apple”.
2. SORT: The SORT function allows you to sort data in a range of cells based on specific criteria. You can use the FLATTEN function to convert an array into a single row or column, and then use the
SORT function to sort the data. For example:
=SORT(FLATTEN(A1:C3), 1, FALSE)
This formula would flatten the array in cells A1:C3 and then use the SORT function to sort the flattened data in descending order.
3. AVERAGE: The AVERAGE function allows you to calculate the average value of a range of cells. You can use the FLATTEN function to convert an array into a single row or column, and then use the
AVERAGE function to calculate the average value. For example:
=AVERAGE(FLATTEN(A1:C3))
This formula would flatten the array in cells A1:C3 and then use the AVERAGE function to calculate the average value of the cells.
In summary, the FLATTEN function is a powerful tool that allows you to transform arrays of data with multiple dimensions into a single row or column in Google Sheets. This can be useful for
simplifying data and making it easier to work with, especially if you have data that is nested or has multiple dimensions. The FLATTEN function is easy to use and can be a great addition to your
Google Sheets toolkit.
If you’re interested in using the FLATTEN function in your own Google Sheets, I encourage you to give it a try! Start by experimenting with simple arrays to get a sense of how the function works, and
then try using it with more complex data sets. With a little practice, you’ll be a pro at using the FLATTEN function to transform and organize your data in Google Sheets. So, don’t hesitate to give
it a try and see how it can help you work with your data more efficiently!
Video: FLATTEN Function
In this video, you will see how to use the FLATTEN function. We suggest you watch the video to understand the usage of the FLATTEN formula.
Probability and statistics blog
For a JavaScript-based project I’m working on, I need to be able to sample from a variety of probability distributions. There are ways to call R from JavaScript, but they depend on the server running
R. I can’t depend on that. I need a pure JS solution.
I found a handful of JS libraries that support sampling from distributions, but nothing that lets me use the R syntax I know and (mostly) love. Even more importantly, I would have to trust the
quality of the sampling functions, or carefully read through each one and tweak as needed. So I decided to create my own JS library that:
• Conforms to R function names and parameters – e.g. rnorm(50, 0, 1)
• Uses the best entropy available to simulate randomness
• Includes some non-standard distributions that I’ve been using (more on this below)
I’ve made this library public at Github and npm.
Not a JS developer? Just want to play with the library? I’ve setup a test page here.
Please keep in mind that this library is still in its infancy. I'd highly recommend you do your own testing on the output of any distribution you use. And of course let me know if you notice any problems.
In terms of additional distributions, these are marked “experimental” in the source code. They include the unreliable friend and its discrete cousin the FML, a frighteningly thick-tailed distribution
I’ve been using to model processes that may never terminate.
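The library itself is JavaScript, so as an illustrative analogue only — not the library's API — here is what an R-style sampler signature looks like, sketched in Python with the standard `random` module:

```python
import random

def rnorm(n, mean=0.0, sd=1.0):
    """R-style normal sampler, mirroring rnorm(n, mean, sd)."""
    return [random.gauss(mean, sd) for _ in range(n)]

def runif(n, minimum=0.0, maximum=1.0):
    """R-style uniform sampler, mirroring runif(n, min, max)."""
    return [random.uniform(minimum, maximum) for _ in range(n)]

random.seed(1)
draws = rnorm(50, 0, 1)   # like rnorm(50, 0, 1) in R
print(len(draws))  # 50
```

The design point is simply that the function names and argument order match R's, so anyone who knows `rnorm(50, 0, 1)` can use the library without relearning an API.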
Age Dependence in Failure Rates: Exits of Semiconductor Manufacturing Firms
Theory of Organizational Ecology
Next we consider the lengths of “careers’’ in the semiconductor industry, the length of time between entry and exit of firms. Limitations on the data preclude an analysis with as much detail as just
reported for labor unions. Entries and exits are recorded only at yearly intervals in the data we used; so observed durations in the industry are also aggregated in time. Since more than half the
firms in the population had observed durations of three or fewer years in the semiconductor industry, the temporal aggregation is large relative to the time scale of events.^61
Given the temporal aggregation of these data, we display the qualitative pattern of variation in exit rates with length of time in the industry using the actuarial estimator of the hazard. Figure
10.8 plots the estimated hazard by years in the industry over the observed range. This plot suggests a pronounced liability of newness. The hazard during the first year in the industry is
considerably larger than for any other year. From this initial high level, the hazard of exit drops fairly consistently with number of years in the industry, falling to zero toward the upper end of
the observed range. In other words, the rate of exit drops roughly monotonically with time in the industry. There does seem to have been a liability of newness.
Figure 10.8 Integrated hazard: semiconductor firm mortality (dashed lines indicate 95% confidence interval)
In the spirit of the previous section, we next explore whether the pattern of decline in the rate agrees with a Gompertz or a Weibull model. Recall that the Gompertz model implies that the log-hazard
is a linear function of elapsed time and the Weibull implies that it is a linear function of the logarithm of elapsed time. Figures 10.9 and 10.10 display these two plots. Both plots seem to fit
reasonably well. The correlation is slightly higher for the relation of the log-hazard to log-time (-.84 versus -.82), but this difference is too small to form the basis of a choice of
specifications. However, the linear relationship between the log-hazard and elapsed time, which follows from the Gompertz model, does not fit well in the lower range of the time scale. Note in Figure
10.9 that the first five observations (in time) fall above the regression line whereas the same observations fall around the regression line in Figure 10.10. This difference suggests that the
Gompertz model understates the hazard at short durations, the period of most interest from the perspective of arguments about a liability of newness. Therefore we use an approximation of the Weibull
model (and the generalized gamma model) in analyses in subsequent chapters that introduce the effects of changing covariates.
Figure 10.9 Log-hazard of semiconductor firm exiting by duration
Figure 10.10 Log-hazard of semiconductor firm exiting by log-duration
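The diagnostic used above — log-hazard linear in elapsed time under the Gompertz model, linear in log-time under the Weibull — is easy to reproduce on simulated data. The following is a minimal sketch (not the authors' code; all names and parameter values are my own) that computes actuarial hazard estimates over yearly intervals and compares the two linearizations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate firm lifetimes from a Weibull distribution with shape < 1,
# which produces a hazard that declines with age (a "liability of newness").
shape, scale = 0.6, 5.0
durations = scale * rng.weibull(shape, size=5000)

# Actuarial (life-table) hazard over yearly intervals:
# h_t ~= d_t / (n_t - d_t / 2), exits over effective exposure in year t.
points = []
for t in range(10):
    at_risk = np.sum(durations >= t)
    exits = np.sum((durations >= t) & (durations < t + 1))
    if exits > 0 and at_risk - exits / 2 > 0:
        points.append((t + 0.5, exits / (at_risk - exits / 2)))
points = np.array(points)
t_mid, h = points[:, 0], points[:, 1]

# Gompertz diagnostic: log-hazard against time.
# Weibull diagnostic:  log-hazard against log-time.
r_gompertz = np.corrcoef(t_mid, np.log(h))[0, 1]
r_weibull = np.corrcoef(np.log(t_mid), np.log(h))[0, 1]
print(f"corr(log h, t)     = {r_gompertz:.2f}")
print(f"corr(log h, log t) = {r_weibull:.2f}")
```

Because these data are generated from a Weibull model, the correlation of the log-hazard with log-time should typically be the stronger (more negative) of the two, mirroring the comparison reported for the semiconductor data (-.84 versus -.82).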
We also wanted to learn how conditions at time of founding and unobserved heterogeneity combined with aging in affecting exit rates. We have done this in two ways. In one, we assigned to each firm
the midyear of its final year as the date of exiting. In the other, we assigned each firm an exiting date that is the sum of the difference between its exiting year and entering year and a random
number uniformly distributed between zero and one. We then applied the variety of models discussed earlier in the chapter to these two sets of observations. The two analyses agree that the
generalized gamma model fits significantly better than any of its special cases. In particular, the results indicate that there was substantial unobserved variation in exit rates and a strong liability
of newness.
The analyses reported in this chapter point in a fairly clear direction. We find that failure rates are monotonic functions of age (or duration in the industry, in the case of semiconductor firms).
We also find that the simple Weibull model appears to provide a better representation of this process than the Gompertz model, which has formed the basis of most previous research on this subject.
Finally, we find that adding unobserved heterogeneity sometimes improves the fit of Weibull models significantly but does not alter conclusions about the presence of age dependence in the rates. We
build on these results in the remaining chapters. In particular, we begin with Weibull models in analyses of density dependence and niche width, and we explore the consequences of adding unobserved
heterogeneity to these models. We begin in the next chapter with the effects of density on failure rates.
Source: Hannan Michael T., Freeman John (1993), Organizational Ecology, Harvard University Press; Reprint edition.
What is the derivative of #y=3x^2e^(5x)# ?
1 Answer
This is a product of the function $3x^2$ and the function $e^{5x}$, which is itself the composite of the functions given by $e^x$ and by $5x$.

Thus we will need the product rule to the effect that:

$(3x^2 e^{5x})' = (3x^2)' e^{5x} + 3x^2 (e^{5x})'$

As well as the fact that:

$(3x^2)' = 6x$

Using these basic differentiation rules, and the Chain Rule combined with $(e^x)' = e^x$, we can calculate that:

$(e^{5x})' = e^{5x} \times (5x)' = 5e^{5x}$.

As a result:

$(3x^2 e^{5x})' = 6x e^{5x} + 15x^2 e^{5x} = 3x(2 + 5x)e^{5x}$.
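If you have SymPy available, the computation can also be double-checked mechanically (this is just a verification aid, not part of the hand derivation):

```python
import sympy as sp

x = sp.symbols('x')
y = 3 * x**2 * sp.exp(5 * x)

dy = sp.diff(y, x)

# The derivative should match 6x e^(5x) + 15x^2 e^(5x), i.e. 3x(2 + 5x)e^(5x).
expected = 6 * x * sp.exp(5 * x) + 15 * x**2 * sp.exp(5 * x)
print(sp.simplify(dy - expected) == 0)  # prints True
```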
Contrastive Learning of Structured World Models
28 Nov 2019
• The paper introduces Contrastively-trained Structured World Models (C-SWMs).
• These models use a contrastive approach for learning representations in environments with compositional structure.
• The training data is in the form of an experience buffer \(B = \{(s_t, a_t, s_{t+1})\}_{t=1}^T\) of state transition tuples.
• The goal is to learn:
□ an encoder \(E\) that maps the observed states \(s_t\) (pixel state observations) to latent states \(z_t\).
□ a transition model \(T\) that predicts the dynamics in the hidden state.
• The model defines the energy of a tuple \((s_t, a_t, s_{t+1})\) as \(H = d(z_t + T(z_t, a_t), z_{t+1})\).
• The model has an inductive bias for modeling the effect of action as translation in the abstract state space.
• An extra hinge-loss term is added: \(\max(0, \gamma - d(\tilde{z}_t, z_{t+1}))\), where \(\tilde{z}_t = E(\tilde{s}_t)\) is a corrupted latent state corresponding to a randomly sampled state \(\tilde{s}_t\).
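The energy and the hinge term above can be sketched in a few lines of NumPy. This is an illustration of the loss shape only, not the paper's reference implementation; all function and variable names are mine, and `delta` stands in for the transition model's output \(T(z_t, a_t)\):

```python
import numpy as np

def energy(z_t, delta, z_next):
    # H = d(z_t + T(z_t, a_t), z_{t+1}), with d the squared Euclidean distance.
    return np.sum((z_t + delta - z_next) ** 2, axis=-1)

def cswm_loss(z_t, delta, z_next, z_corrupt, gamma=1.0):
    # Positive term: pull the predicted next latent toward the true one.
    pos = energy(z_t, delta, z_next)
    # Hinge term: push a corrupted latent at least gamma away from z_{t+1}.
    neg = np.maximum(0.0, gamma - np.sum((z_corrupt - z_next) ** 2, axis=-1))
    return float(np.mean(pos + neg))

# Toy batch: perfect predictions and a distant corrupted state give zero loss.
z = np.zeros((4, 8))
print(cswm_loss(z, z, z, z + 10.0))  # 0.0
```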
Object-Oriented State Factorization
• The goal is to learn object-oriented representations where each state embedding is structured as a set of objects.
• Assuming the number of object slots to be \(K\), the latent space and the action space can be factored into \(K\) independent latent spaces (\(Z_1 \times ... \times Z_K\)) and action spaces (\(A_1 \times ... \times A_K\)), respectively.
• There are K CNN-based object extractors and an MLP-based object encoder.
• The actions are represented as one-hot vectors.
• A fully connected graph is induced over K objects (representations) and the transition function is modeled as a Graph Neural Network (GNN) over this graph.
• The transition function produces the change in the latent state representation of each object.
• The factorization can be taken into account in the loss function by summing over the loss corresponding to each object.
• Grid World Environments - 2D shapes, 3D blocks
• Atari games - Pong and Space Invaders
• 3-body physics simulation
• Random policy is used to collect the training data.
• Evaluation is performed in the latent space (no reconstruction in the pixel space) using ranking metrics. The observations (to compare against) are randomly sampled from the buffer.
• Baselines - auto-encoder based World Models and Physics as Inverse Graphics model.
• In the grid-world environments, C-SWM models the latent dynamics almost perfectly.
• Removing either the state factorization or the GNN transition model hurts the performance.
• C-SWM performs well on Atari as well but the results tend to have high variance.
• The optimal values of \(K\) should be obtained by hyperparameter tuning.
• For the 3-body physics tasks, both the baselines and proposed models work quite well.
• Interestingly, the paper has a section on limitations:
□ The object extractor module can not disambiguate between multiple instances of the same object (in a scene).
□ The current formulation of C-SWM can only be used with deterministic environments.
Logic Seminar talk: On representability by arithmetic terms - Institute for Logic and Data Science
On April 18, 2024 at 14:00 EEST, Mihai Prunescu (University of Bucharest and IMAR) will give a talk in the Logic Seminar.
Title: On representability by arithmetic terms
Consider number-theoretic functions like \(\tau\), which represents the number of divisors of a natural number, \(\sigma\), which yields the sum of its divisors, or Euler’s totient function \(\varphi\), which computes, for any \(n\), the number of residues modulo \(n\) that are relatively prime to \(n\). There are methods to compute these functions for a given argument \(n\) from the prime
number decomposition of \(n\), but it is difficult to imagine arithmetic closed terms in \(n\) alone, computing them. Yet, those functions are Kalmár-elementary and by the results of Mazzanti and
Marchenkov, such terms do exist. As well, closed arithmetic terms represent the \(n\)th prime and various other number-theoretical functions. I will show how such terms can be effectively
constructed. Work in progress.
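As context for the talk: computing these functions from the prime number decomposition of \(n\) is straightforward, which is what makes the existence of closed arithmetic terms in \(n\) alone the surprising part. A quick illustration of the factorization-based computation (function names are mine):

```python
def factorize(n):
    # Trial-division prime factorization: returns {prime: exponent}.
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def tau(n):
    # Number of divisors: product of (e_i + 1) over prime exponents e_i.
    out = 1
    for e in factorize(n).values():
        out *= e + 1
    return out

def sigma(n):
    # Sum of divisors: product of (p^(e+1) - 1) / (p - 1).
    out = 1
    for p, e in factorize(n).items():
        out *= (p ** (e + 1) - 1) // (p - 1)
    return out

def phi(n):
    # Euler's totient: n * product of (1 - 1/p) over prime divisors p.
    out = n
    for p in factorize(n):
        out = out // p * (p - 1)
    return out

print(tau(12), sigma(12), phi(12))  # 6 28 4
```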
The talk will take place physically at FMI (Academiei 14), Hall 214 “Google”.
J Sched (2012) 15:641–651
DOI 10.1007/s10951-011-0238-9
Soccer schedules in Europe: an overview
Dries R. Goossens · Frits C.R. Spieksma
Published online: 2 June 2011
© Springer Science+Business Media, LLC 2011
Abstract In this paper, we give an overview of the competition formats and the schedules used in 25 European
soccer competitions for the season 2008–2009. We discuss
how competitions decide the league champion, qualification
for European tournaments, and relegation. Following Griggs
and Rosa (Bull. ICA 18:65–68, 1996), we examine the popularity of the so-called canonical schedule. We investigate
the presence of a number of properties related to successive
home or successive away matches (breaks) and of symmetry
between the various parts of the competition. We introduce
the concept of ranking-balancedness, which is particularly
useful to decide whether a fair ranking can be made. We also
determine how the schedules manage the carry-over effect.
We conclude by observing that there is quite some diversity in European soccer schedules, and that current schedules leave room for further optimizing.
Keywords Soccer · Scheduling · Canonical schedule ·
Ranking-balancedness · Breaks · Mirroring · Carry-over
Dries R. Goossens is PostDoc researcher for Research
D.R. Goossens · F.C.R. Spieksma
Center for Operations Research and Business Statistics, Faculty of Business and Economics, K.U. Leuven, Naamsestraat 69, 3000 Leuven, Belgium
e-mail: [email protected]
F.C.R. Spieksma
e-mail: [email protected]

1 Introduction

Sports have become big business, and in Europe, the most important sport is undoubtedly soccer. Soccer in Europe involves millions of fans, and billions of euros have been paid
for broadcasting rights, advertising, and merchandizing. Europe is also the venue for thrilling competitions as for instance the Premier League, the Primera Division, and the
Champions League, which involve the richest and most successful teams in the world. Obviously, with those amounts of
money at stake, teams want to play their matches according
to a schedule that maximizes their revenue. The relationship between the schedule and match attendance has been
discussed by e.g. Scarf and Shi (2008) and Buraimo et al.
(2009), but maximizing revenue should not be done without taking into account the fact that a competition should be
attractive, fair, practicable, and safe for anyone involved.
Finding a good schedule is not an easy challenge, as
wishes from various stakeholders (the league, clubs, fans,
TV, police, etc.) are often conflicting. Moreover, whereas a
number of constraints are common for most competitions,
many leagues have their peculiarities. For example, in the
UK, travel distances for the fans must be minimized over
the Christmas and New Year period, as reported by Kendall
(2008). In regions where the same police force is responsible for guaranteeing the safety of several local clubs, avoiding too many derbies (Kendall et al. 2010), or simultaneous home games (Goossens and Spieksma 2009) is an issue. In other competitions, multiple television companies
hold broadcasting rights and the schedule should balance
the interesting matches over the rounds for each TV station
(see e.g. Della Croce and Oliveri 2006). In The Netherlands,
Schreuder (1992) reports that the railway schedule is taken
into account when scheduling the league, to make sure that
fans of rivaling clubs do not meet when taking the train to
attend a football match. Over the last decade, sport scheduling received more attention from researchers from fields
such as operations research, computer science, and mathematics.
Kendall et al. (2010) give a recent overview of the research done so far in sports scheduling, and classify the contributions according to the methodology used and the application, where soccer turns out to be the most popular topic.

Table 1 Overview of competition formats
There are quite a few papers that present a solution approach for a specific soccer league in Europe: Schreuder
(1992) for The Netherlands, Bartsch et al. (2006) for Austria
and Germany, Della Croce and Oliveri (2006) for Italy, Rasmussen (2008) for Denmark, Flatberg (2009) for Norway,
and Goossens and Spieksma (2009) for Belgium. There are
also a couple of papers that try to classify sports scheduling
problems. Bartsch et al. (2006) give a survey of a number of
sports scheduling problems discussed in the literature, and
indicate what type of constraints occur. A more elaborate
classification of the various constraints involved is provided
by Nurmi et al. (2010). These authors present a framework
for a sports scheduling problem with 36 types of constraints,
modeled from various professional sports leagues, including
a set of artificial and real-world instances, with the best solutions found. Nevertheless, as far as we are aware, there is
only one paper that does not focus on the process of obtaining a solution, but instead exclusively focuses on the actual
solutions of sports scheduling problems: the schedules. Over
a decade ago, Griggs and Rosa (1996) published a short paper entitled “A tour of European soccer schedules, or testing the popularity of GK2n”. For the season 1994–1995,
they examined schedules of the highest division in 25 European soccer competitions given in Table 1. They focused on
identifying the competitions that made use of the so-called
“canonical schedule” (see Sect. 3), and found that it is used
in 16 of these competitions.
This paper can be seen as a follow-up of the work by
Griggs and Rosa (1996): we revisit the 25 competitions they
listed in 1996. These competitions still form a balanced sample of strong and weak soccer competitions in Europe. We
look at the schedules for season 2008–2009 (or the 2008
schedules for countries as Norway, where the soccer season
corresponds with the calendar year), and verify whether they
have a number of interesting properties. Thus, our goal in
this work is modest: to investigate the schedules according
to which today’s soccer competitions are being played. This
gives insights in the diversity of the presence of different
properties, and provides an answer to the question what features are apparently considered important in European soccer schedules. Notice that this type of information is usually
not explicitly available, as the properties of a schedule often
result from compromises on meetings with members from
the association. Further, we will compare our findings with
those of Griggs and Rosa (1996) and comment on the potential of further optimizing today’s schedules. We also introduce the concept of ranking-balancedness, which compares
the number of home games played by each team after each
round, and allows to express whether a fair ranking can be
produced after each round.
In the remainder of this paper, when we discuss a competition, we mean its highest division, to which we refer as
the first division. We use n for the number of teams taking
part in a competition, and l for the number of matches between a pair of teams in (a stage of) a competition. Matches
are grouped in so-called “rounds”, meaning that they are
scheduled to be played on the same day or weekend. In
order to draw any conclusions about popular features in
a soccer schedule, it is important to consider the fixtures
as they were scheduled before the start of the season. We
got this information from websites as www.the-sports.org,
www.rsssf.com, and www.gooooal.com. These fixtures regularly differ from the order according to which the matches
are actually played. Indeed, it is not uncommon that several matches in a season are postponed because of weather
conditions, or conflicts with Champions League or Europa
League matches. In rare occasions, entire rounds are put off:
in Northern Ireland the (planned) first round was played after round 10, because of a strike of the referees (McCreary).
In Sect. 2, we focus on the competition format, and the
number of teams participating. We examine the popularity of
the so-called “canonical schedule” in Sect. 3 and compare it
with the findings by Griggs and Rosa (1996). The symmetry
between the various parts of the competition, and the number of rounds between two successive encounters of each
team is the topic of Sect. 4. In Sect. 5, we look at how schedules deal with successive home (away) matches. The balancedness of home and away games is discussed in Sect. 6,
where we introduce the concept of ranking-balancedness.
Section 7 investigates to what extent the so-called “carry-over effects” are balanced in the schedules of our 25 competitions. Finally, a conclusion is presented in Sect. 8.
2 Competition format
When we observe the 25 European soccer competitions in
Table 1, we notice that the number of teams taking part
varies between 10 (in Switzerland, Austria, and Malta) and
20 (in Italy, France, Spain, and England). The most popular number of teams is 18; no competition is played with
an odd number of teams. A larger or more populated country does not necessarily have more teams in its competition
(e.g., The Netherlands have two teams more than Russia),
but stronger competitions according to the UEFA League
ranking tend to have more teams. The number of teams occasionally changes, for instance when competitions choose
a new format. Compared to the season 1994–1995, when
Griggs and Rosa (1996) made their survey, five competitions
increased their number of teams by two (namely, Cyprus,
Italy, Luxembourg, Norway, and Scotland). On the other
hand, five competitions decreased their number of teams by
two (England, Poland, Portugal, Switzerland, and Wales),
and Northern Ireland even went from 16 to 12 teams.
All national soccer championships in Europe consist for
the main part (if not fully) of a round robin tournament.
A round robin tournament is a tournament where all teams
meet all other teams a fixed number of times. In 19 of the 25
competitions we investigated, a double-round robin tournament is played (i.e., each team meets each other team twice).
Slovakia, Scotland, Northern Ireland, and Ireland have a
triple-round robin tournament; in Austria and Switzerland,
the competition consists of a quadruple-round robin tournament.
In five of the competitions, the regular stage of the competition is followed by a play-off stage. Notice that it is not
clear from the beginning of the season which teams will
take part in the play-off stage, since this depends on their
performance in the regular stage. Consequently, no play-off
schedule can be made in the beginning of the season. The
goal of this play-off can be to decide the league champion,
to decide qualification for European tournaments (Champions League or Europa League), to claim promotion, or to
avoid relegation. In Northern Ireland, Scotland, Cyprus and
Malta, a play-off stage determines which team is the league
champion, and which teams qualify for Europe. In Northern
Ireland and Scotland, the play-off stage consists of a single-round robin tournament, played with the best six teams from
the regular stage. In Cyprus and Malta, the play-off stage
is a double-round robin tournament with the best four and
six teams, respectively, from the regular stage. Teams take
the points they collected in the regular stage with them to
the play-offs, except in Malta, where they keep only half of
these points. In the Netherlands, the league champion is decided after the regular stage, but the teams ranked six till
nine take part in a play-off to compete for the final Europa
League ticket. The format is a direct knock-out tournament,
where the confrontations are decided by a so-called best of
three legs. We make a distinction between promotion play-offs and relegation play-offs: the latter is contested solely
between teams from the first division; in the former, at least
one team from the second division takes part. All competitions organize their promotion play-off in a direct knock-out format, except for Belgium, where a double-round robin
tournament is organized. A relegation play-off is less common, but occurs in Northern Ireland and Scotland (single-round robin tournament with the six lowest-ranked teams),
Cyprus and Malta (double-round robin tournament with the
four lowest-ranked teams). In all relegation play-offs, teams
keep their points from the first stage, except in Malta, where
only half of the points is carried over. Notice that a promotion and a relegation play-off do not exclude each other: in
Northern Ireland, the one but last from the relegation play-off gets a second chance in the promotion play-off.
Clearly, the differences in league format and number of
teams, result in a different number of matches played per
team in different competitions. The one but last column in
Table 1 shows the number of league matches a team plays
during one season for each of the 25 competitions. In some
competitions, the number of matches played by a team depends on which play-off, if any, it qualifies for. On average,
a team plays 33.46 matches in a season. However, in Cyprus,
a team’s season can be finished after 24 games, whereas
a team from The Netherlands may have to contest no less
than 40 league games. Notice that we did not take into account the matches for the promotion play-off, because they
are in general not between first division teams, and organized by the association of second division teams. Although
most competitions have a tie-breaker like goal difference or
head-to-head results, in some competitions one or more “test
games” are needed to decide in case two teams end up with
the same number of points. In the season 2008–2009, this
happened in Malta, where relegation was settled in a single
test game between Msida St. Joseph and Tarxien Rainbows
(which ended in a penalty shootout), and in Belgium, where
Standard collected the league title in a thrilling two leg confrontation against Anderlecht.
The last column in Table 1 shows that none of the 25
competitions is a closed competition, since at least one team
from the second division promotes to the first division at
the end of the season. Romania catches the eye with a guaranteed promotion for four second division teams. Also in
Belgium, up to four teams could be relegated at the end of
the season 2008–2009, but this was a once-only event, since
the Belgian soccer association decided to reduce the league
to 16 teams in the next season. Ireland has a similar story:
since the competition shrinks from 12 teams in 2008 to 10
in 2009, it has been decided that exceptionally, three teams
will be relegated. On the other hand, in Wales, there was
only one team being relegated instead of two as prescribed
by the competition format, the reason being that Aberaman,
the second division champion, was barred from promotion.
In eight competitions, the number of teams being relegated
is not fixed, but depends on the outcome of the promotion play-off.
3 The canonical schedule
In this section, and the rest of the paper, we focus on the regular stage of the competition. Rasmussen and Trick (2008)
define a schedule as “compact” or “temporally constrained”
when the number of rounds used is minimal. In the case of
an even number of teams, this means that every team plays
on every round. When more rounds are used than needed,
we say the schedule is “(temporally) relaxed”. Griggs and
Rosa (1996) point out that the schedules of Russia and The
Netherlands are relaxed. They quote geographical considerations to explain the schedule in Russia, but are surprised by
the Dutch schedule, for which they unsuccessfully tried to
complete the rounds with games played on separate “irregular” dates. Currently, however, all leagues follow a compact schedule.
Given a single-round robin tournament with an even
number of n teams, a schedule can be seen as a one-factorization of Kn, the complete graph with n nodes. The
nodes in this graph correspond to the teams, and an edge
between two nodes represents a match between the two corresponding teams. A one-factorization of Kn is a partitioning into edge-disjoint one-factors Fi with i = 1, . . . , n − 1.
A one-factor is a perfect matching, i.e., a set of edges such
that each node in the graph is incident to exactly one of these
edges. Each one-factor corresponds to a round in a compact
schedule and represents n/2 matches. One-factorizations
are a popular research topic, dating back to, as far as we
are aware, a paper by Kirkman (1847). Notice that a one-factorization does not necessarily impose an order of the
one-factors; if an order is fixed, we call it an ordered one-factorization. Notice also that a one-factorization does not
specify which team has the home advantage in a given match
(see Sect. 5).
There are many ways to construct a one-factorization
(see e.g., Mendelsohn and Rosa 1985), but undoubtedly,
the most popular method is the so-called “canonical one-factorization”, also known as GK2n. According to Mendelsohn and Rosa (1985), this method is at least a century
old, and can be found in most textbooks on graph theory.
The canonical one-factorization has its one-factors Fi for
i = 1, . . . , n − 1 defined as
Fi = {(n, i)} ∪ {(i + k, i − k) : k = 1, . . . , n/2 − 1}    (1)
where the numbers i + k and i − k are expressed as one of
the numbers 1, 2, . . . , n − 1 (mod n − 1) (De Werra 1981).
Schedules that consist of rounds with pairings of the teams
as described in the canonical one-factorization (possibly
with a different ordering of the rounds than the regular ordering F1 , F2 , . . . , Fn−1 ), are called canonical schedules. One
particular ordering, namely F1 , F3 , . . . , Fn−1 , F2 , F4 , . . . ,
Fn−2 , results in a schedule known as the Berger pairing table, which is not uncommon in chess tournaments.
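The canonical construction described above is simple to implement. Here is a small sketch (my own code, not from the paper) that generates the canonical one-factorization for an even number of teams:

```python
def canonical_schedule(n):
    # Canonical one-factorization of K_n (n even): in round i, team n
    # meets team i, and teams i+k and i-k meet for k = 1, ..., n/2 - 1,
    # where i+k and i-k are taken modulo n-1 as numbers in {1, ..., n-1}.
    assert n % 2 == 0, "needs an even number of teams"
    rounds = []
    for i in range(1, n):
        matches = [(n, i)]
        for k in range(1, n // 2):
            a = (i + k - 1) % (n - 1) + 1   # map residues into 1..n-1
            b = (i - k - 1) % (n - 1) + 1
            matches.append((a, b))
        rounds.append(matches)
    return rounds

sched = canonical_schedule(6)
for r in sched:
    print(r)
```

For n = 6 this yields five rounds of three matches in which every pair of teams meets exactly once; the same routine scales to the league sizes in Table 1 (e.g. n = 18 or n = 20).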
Table 2 Symmetry schemes

Table 3 Overview of canonical schedules, symmetry and separation

Before we can evaluate the popularity of the canonical schedule, we need to solve a recognition problem: given a schedule, is it canonical? Notice that given two rounds, corresponding to the one-factors F1 and F2, and the team that
plays the role of n in (1), we can easily construct the other
one-factors according to (1). Therefore, given a schedule,
it suffices to check, for each pair of rounds taking the role
of F1 and F2 , and each team taking the role of n, whether
the given schedule corresponds with the resulting canonical
schedule, in order to decide whether the given schedule is canonical.
Griggs and Rosa (1996) found that for the season 1994–
1995, 16 of the 23 compact schedules they examined were
based on a canonical 1-factorization. By the season 2008–
2009, this number decreased to 13. The second column
in Table 3 shows whether or not a competition uses the
canonical schedule; between brackets, the situation in 1994–
1995 is given. We point out that in Austria and Switzerland, the order of the rounds differs from the order as prescribed in (1). The canonical schedule was abandoned in
the Czech Republic, Poland, Ireland, Belgium, Germany,
and Norway. We know that the introduction of mathematical programming played an important role in this change
for the latter three competitions (see Bartsch et al. 2006;
Goossens and Spieksma 2009; Flatberg 2009). Indeed, for
schedulers that rely on a manual approach, the canonical
schedule forms a familiar reference. On the other hand, the
canonical schedule was introduced in Russia, Switzerland,
and Northern Ireland; in the latter two competitions, this
went together with a change of competition format. Thus,
we conclude that the popularity of the canonical schedule
still holds, over a decade after the survey by Griggs and Rosa (1996).
4 Symmetry and separation
When focusing on the regular stage of the competition, we
notice that most schedules can be split into equal parts, such
that each part forms a single-round robin tournament. The
third column of Table 3 shows that this is the case in all
competitions except for England and Wales. Swapping two
rounds, however, would be sufficient to create equal parts in
these two competitions as well. In general, matches that are
grouped in a round in one part, are also grouped in the same
round in the other parts of the competition. Exceptions to
this rule are Norway and Scotland, as shown in the fourth
column of Table 3.
Usually, there is some symmetry between the order of
the rounds in the various parts of the competition (see Table 2). In most competitions (15 out of 25, including, among others,
Germany, Italy, and Spain), the second half of the competition is identical to the first, except that the home advantage
is inverted. In case of a third part, as in Northern Ireland and
Slovakia, the schedule for the first part is copied. This system is called mirroring. Another possibility is the so-called
French scheme, where matches in the first and the last round
are identical, as well as matches in round n − 1 + t and round
t + 1 with t = 1, 2, . . . , n − 2 (again with the home advantage inverted). Apart from France, this scheme is used in
Luxembourg, Russia, and the Czech Republic. In the English scheme (Drexl and Knust 2007), the opponents of the
first round of the second part are the same as in the last round
of the first part, and round n + t in the second part corresponds to round t in the first part, for t = 1, 2, . . . , n − 2.
Strangely enough, the English system is not used in England, but in Austria, between the first and the second, and
between the third and the fourth part (there is no relation between the second and the third part). The Swiss competition
consists of four parts, where the first two are mirrored, and
the final two follow an inverted scheme, meaning that the
rounds of the third part are repeated in reverse order in the fourth part.
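As a sketch of these symmetry schemes (the list-of-rounds representation and the helper names are our own, not the paper's), the second half of a double round robin can be derived from a given first half, and the resulting separation s between two meetings of the same pair can be computed:

```python
def flip(rnd):
    """Swap home advantage for every match in a round."""
    return [(a, h) for (h, a) in rnd]

def mirrored(first_half):
    """Mirroring: second half repeats the first half in order, venues inverted."""
    return [flip(r) for r in first_half]

def french(first_half):
    """French scheme: round n-1+t repeats round t+1 (t = 1..n-2);
    the last round repeats round 1 (venues inverted throughout)."""
    return [flip(r) for r in first_half[1:]] + [flip(first_half[0])]

def english(first_half):
    """English scheme: round n has the same opponents as round n-1;
    round n+t repeats round t (t = 1..n-2)."""
    return [flip(first_half[-1])] + [flip(r) for r in first_half[:-1]]

def inverted(first_half):
    """Inverted scheme: second half is the first half in reverse round order."""
    return [flip(r) for r in reversed(first_half)]

def min_separation(schedule):
    """Smallest number of rounds between two meetings of the same pair,
    counted as the difference of round indices, as in the text."""
    last, s = {}, None
    for r, rnd in enumerate(schedule):
        for h, a in rnd:
            pair = frozenset((h, a))
            if pair in last:
                gap = r - last[pair]
                s = gap if s is None else min(s, gap)
            last[pair] = r
    return s

# Hypothetical 4-team first half (n = 4, so n - 1 = 3 rounds):
first = [[(0, 1), (2, 3)], [(0, 2), (3, 1)], [(1, 2), (0, 3)]]
```

On this example the schemes reproduce the separation values stated in the text: s = n − 1 = 3 for mirroring, s = n − 2 = 2 for the French scheme, and s = 1 for the English and inverted schemes.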
In five competitions (England, The Netherlands, Norway,
Scotland, and Wales), none of the above symmetry schemes
is used. The schedule in Wales is, however, very close to the
English scheme: swapping round 17 with round 19 would
be sufficient. Symmetry schemes are generally perceived as
a way to add fairness to the schedule, since they insert a considerable number of rounds between two meetings of most
pairs of teams. Indeed, meeting an opponent twice in a short
timespan would be advantageous when this opponent is
weakened by injuries or low morale because of a losing run.
However, symmetry schemes also limit the options when numerous wishes of various stakeholders need to be satisfied as well. In such competitions, a separation constraint
can be used when creating the schedule, enforcing that
there should be at least s rounds between two games with
the same opponents (see e.g., Rasmussen and Trick 2008;
Bartsch et al. 2006). The final column in Table 3 shows the
minimal number of rounds between two matches with the
same opponents. Since for mirrored schedules, there are exactly n − 1 rounds between all matches with the same opponents, s = n − 1. The French scheme results in s = n − 2,
J Sched (2012) 15:641–651
however, for the English and the inverted scheme, s = 1, because the last round of the first part corresponds with the
first round of the second part.
5 Breaks
Forrest and Simmons (2006) show that scheduling of home
games consecutively has a negative impact on attendance.
Therefore, it is desirable for each team to have a perfect
alternation of home and away games. Since in any round
robin schedule for an even number of teams this can be
achieved for at most two teams, most teams will have a series of two successive home games, or two successive away
games, which we call a “break”. In many competitions, it
is an important consideration to have a low total number of
breaks, and that a team does not have two (or more) successive breaks, meaning that it should not have more than
two successive home (away) games. The minimal number
of breaks for a single-round robin tournament with an even
number of teams is n − 2 (De Werra 1981). In particular, De Werra (1981) shows that this can be achieved in
an ordered canonical schedule as follows: an edge (i, n) has
team i as the home side if i is odd, and team n as the home
side if i is even. Further, an edge (i + k, i − k) has i + k as
the home side if k is odd; i − k is the home side if k is even.
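De Werra's construction on the ordered canonical schedule can be sketched as follows (the team and round numbering conventions are our own); for n = 6 it yields n − 2 = 4 breaks:

```python
def canonical_with_breaks(n):
    """Ordered canonical schedule for n (even) teams with De Werra's
    home/away assignment: in round i, the match (i, n) has i at home
    iff i is odd, and the match {i+k, i-k} (mod n-1) has i+k at home
    iff k is odd. Returns a list of rounds of (home, away) pairs."""
    m = n - 1

    def wrap(x):
        return (x - 1) % m + 1  # map onto 1..n-1

    schedule = []
    for i in range(1, n):                       # rounds 1..n-1
        rnd = [(i, n) if i % 2 == 1 else (n, i)]
        for k in range(1, n // 2):
            a, b = wrap(i + k), wrap(i - k)
            rnd.append((a, b) if k % 2 == 1 else (b, a))
        schedule.append(rnd)
    return schedule

def count_breaks(schedule, n):
    """A break = two consecutive home games or two consecutive away games."""
    seqs = {t: [] for t in range(1, n + 1)}
    for rnd in schedule:
        for h, a in rnd:
            seqs[h].append('H')
            seqs[a].append('A')
    return sum(s[j] == s[j - 1] for s in seqs.values() for j in range(1, len(s)))

schedule = canonical_with_breaks(6)
```

For n = 6 the resulting single-round robin has exactly 4 breaks, matching the n − 2 bound.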
For a double-round robin tournament, a schedule with
2n − 4 breaks can easily be constructed from a single-round
robin schedule with a minimal number of breaks by using
the inverted scheme. If we want a mirrored double-round
robin schedule, the minimal number of breaks is 3n − 6,
and if n = 4, this can be achieved without a team having
successive breaks (De Werra 1981). However, if there is no
need for a schedule that consists of consecutive single-round
robin tournaments, we can limit the number of breaks to
n − 2, even if all teams meet each other team more than
twice. This is illustrated for a double-round robin tournament with six teams in Table 4, where rounds 1, 4, 5, 8, and
9 form a single-round robin tournament with n − 2 breaks.
Sometimes, competitions prefer to equally distribute the
breaks over the teams, although the minimum number of
breaks then increases to n for a single, and 2n for a double-round robin tournament (De Werra 1980). This type of
schedule is called an “equitable schedule”. Starting from an
equitable single-round robin schedule, the French scheme is
a way to create an equitable double-round robin schedule.
The second column in Table 5 shows the number of
breaks in each competition, followed by (between brackets)
the ratio of this number of breaks and the minimal number of breaks of a schedule that consists of l single-round
robin tournaments (i.e., l(n − 2)). No competition has a
schedule where the number of breaks is minimal, but most
schedules do not exceed the minimal number of breaks with
Table 4 A double-round robin schedule for six teams with n − 2 breaks [the schedule grid itself was lost in extraction]
Table 5 Overview of breaks and balancedness (columns: breaks per competition, maximal series of consecutive home/away games, breaks per team, |Δ_{i,R}| (k), and g) [the table body was garbled in extraction and is omitted]
more than 50%. In five competitions, however, the number of breaks seems irrelevant, as they use over twice as
many breaks as needed. Urrutia and Ribeiro (2006) show
that a large number of breaks, and successive breaks, can
be advantageous to minimize travel distances. While this could be a motivation for the high number of breaks in England, it is questionable whether this explains the situation in
The Netherlands, Scotland, Wales and Northern Ireland. The
third column in Table 5 shows the maximal number of consecutive home (away) games for each competition. In most
competitions, no team plays more than two home (away)
matches in a row. Exceptions are Luxembourg, Poland, and
Wales (4) and Northern Ireland, where Cliftonville found
five consecutive home games on its schedule (a welcome
compensation for a start with six away games in the first
eight rounds).
The fourth column of Table 5 gives the minimal and
maximal number of breaks per team for each competition.
Clearly, the various leagues have a very different assessment of the importance of an equal number of breaks. Most
competitions opt for mirroring, which leads not just to
more breaks than necessary in general, but also to an uneven distribution of the breaks: two teams without breaks,
three breaks for all other teams in a double-round robin tournament. France, Russia and the Czech Republic opt for an
equitable schedule, whereas in Wales and Northern Ireland,
the difference in number of breaks between two teams can
be huge. The fifth and sixth column show that less than one
third of the leagues has a team that starts with two away
games (or two home games). A similar observation can be
made for the last two rounds: in 80% of the competitions,
a break in the final round is avoided. A traditional argument is that the first matches set the tone for the rest of the season, and thus starting with two away games could be
disadvantageous. Similarly, concluding the season with two
home games could present a decisive advantage for teams
still in the running for the league title or relegation. In Russia and Wales, these considerations are not relevant, since
their competitions allow breaks both on the second and on
the last round.
6 Balancedness
For reasons of fairness, it may be desirable that each team
plays approximately half of its games at home, and the
other half away. For each team i, we denote the number
of home (away) games played after round r as h_{i,r} (a_{i,r}).
Moreover, we define Δ_{i,r} as the difference between home
games and away games played by team i after round r, i.e.
Δ_{i,r} = h_{i,r} − a_{i,r}. Knust and v. Thaden (2006) call a schedule balanced if for each team, the numbers of home and
away games played at the end of the season differ by at most
one, i.e. if |Δ_{i,R}| ≤ 1, where R is the final round. They also
show that a balanced home-away assignment always exists.
Nurmi et al. (2010) call a schedule k-balanced if the number
of home and away games for each team differ by at most k
after each round of the tournament. In other words, k corresponds to the maximal value of |Δ_{i,r}| over all teams i and
rounds r.
The eighth column in Table 5 shows the values for |Δ_{i,R}|;
between brackets the value for k is given. It is striking that
three competitions do not have a balanced schedule according to the measure by Knust and v. Thaden (2006): Luxembourg, Northern Ireland, and Wales. In these competitions, a team finished the competition with two home games
more than another team. In Slovakia, Ireland, and Scotland,
there are teams that end the competition having played one
home game more than some other teams. This is, however,
inevitable since a triple-round robin tournament (with an
even number of teams) is played in these competitions. In
Scotland, this is compensated in the play-off stage, a single-round robin tournament that offers the possibility of an extra home game to those teams that were at a disadvantage
in the regular stage. Throughout the season, the difference
between home and away games exceeds two at some point
in almost all competitions. As mentioned before in Sect. 5,
Cliftonville (Northern Ireland) played six away games in the
first eight rounds; nevertheless they ended the regular stage
having played two more home games than away games.
Another concern is to have a league table that offers a
fair ranking after each round. Given the advantage that a
home game offers, this would be the case if the number of
home games played by each team after each round is as balanced as possible. Since, as far as we are aware, no measure
takes this into account, we introduce the concept of ranking-balancedness. We call a schedule g-ranking-balanced if, after each round, the difference between the number of home
games played by any two teams up to then is at most g, or
more formally if for each round r
max_i(h_{i,r}) − min_j(h_{j,r}) ≤ g.
It is trivial to show that a schedule with g = 0 does not
exist. Therefore, the most balanced schedule has g = 1,
meaning that after each round, a team played at most one
home game more or less than any other team. The ranking-balancedness measure is related to, but distinct from, the balancedness defined by Knust and v. Thaden (2006). Indeed,
k-balancedness focuses on the difference between home and
away games for a team, whereas g-ranking-balancedness
deals with the difference in number of home games played
between teams after a round. The value for g for each competition is given in the final column of Table 5, and differs
from k for several schedules. In most competitions, the difference in number of home games played between teams
is small (g = 2), with Wales (g = 4) and Northern Ireland (g = 5) as exceptions. England has the most ranking-balanced competition, which is surprising, since the schedule of this league has not displayed much structure until
now. The Premier League is, however, the only competition
where the difference in home games played between any two
teams is never more than one.
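A minimal sketch of computing g as defined above (the schedule representation as rounds of (home, away) pairs is our own assumption, and the check only makes sense when every team plays in every round):

```python
def ranking_balance_g(schedule, teams):
    """g = max over rounds r of (max_i h_{i,r} - min_j h_{j,r}),
    where h_{i,r} is the number of home games team i has played
    after round r."""
    home = {t: 0 for t in teams}
    g = 0
    for rnd in schedule:
        for h, _ in rnd:
            home[h] += 1
        g = max(g, max(home.values()) - min(home.values()))
    return g

# Hypothetical 4-team single round robin:
rounds = [[(0, 1), (2, 3)], [(0, 2), (3, 1)], [(1, 2), (0, 3)]]
```

On this small example, team 0 plays three home games in a row, so the schedule is only 2-ranking-balanced.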
7 The carry-over effect
Any schedule for a round robin tournament involves an order
in which each team meets its opponents. We say that a team
i gives a carry-over effect to a team j , if some other team t’s
game against i is followed by a game against team j . This is
particularly relevant in physical, body-contact sports. For instance, if team i is a very strong, tough-playing side, one can
imagine that its opponent, team t, is weakened by injuries or
fatigue, which could be an advantage for its next opponent,
team j . Moreover, the carry-over effect could also be relevant in a strictly psychological interpretation, when team t
loses confidence and morale after a severe loss against the
strong team i, again to the benefit of their next opponent,
team j . The opposite may be true if team i is a weak team.
Clearly, carry-over effects are unavoidable in any schedule,
but schedules can differ in the extent to which carry-over effects are balanced over the teams. We define c_{ij} as the number of times that team i gives a carry-over effect to team j
in a schedule. These values can be seen as the elements of
matrix C, which we call the carry-over effects matrix. The
degree to which the carry-over effects are balanced is typically measured by the so-called carry-over effects value,
which is defined as Σ_{i,j} c_{ij}² (Russell 1980).
Fig. 1 The carry-over effect
Table 6 Schedule (a) and its carry-over effects matrix (b) for a single-round robin tournament with six teams [the table itself was lost in extraction]
Table 6 shows an example of a schedule for a single-round robin tournament with six teams (a), and the corresponding carry-over effects matrix (b). For instance, c_{41},
the number of times that team D gives a carry-over effect
to team A, equals 3, since it happens three times that A’s
opponent played against team D in the previous round. Notice that according to Russell’s definition, the carry-over effect from the last round to the first is also counted, although
of course in practice this is meaningless. The carry-over effects value for this schedule is 60, which is actually minimal (Russell 1980).
The lowest carry-over effect value we may hope for in
a single-round robin tournament with n teams is n(n − 1).
This is the case when all non-diagonal entries of C equal 1,
and the diagonal entries equal zero. A schedule that achieves
this is called a balanced schedule. Russell (1980) presents
an algorithm that results in a balanced schedule when n is a
power of 2. For other values of n, the best known results are
by Anderson (1999). It is not hard to see that the canonical
schedule results in the maximal carry-over effect value.
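The carry-over effects value can be computed directly from its definition; the sketch below uses our own rounds-of-pairs representation. The hypothetical 4-team schedule in the example is balanced (n = 4 is a power of 2), so it attains the lower bound n(n − 1) = 12:

```python
def carry_over_value(schedule, teams):
    """Russell's carry-over effects value: sum over (i, j) of c_ij^2,
    where c_ij counts how often some team meets i in one round and j in
    the next (the round after the last wraps around to the first, as in
    Russell's definition)."""
    opponents = {t: [] for t in teams}
    for rnd in schedule:
        for h, a in rnd:
            opponents[h].append(a)
            opponents[a].append(h)
    c = {}
    for seq in opponents.values():
        for r in range(len(seq)):
            i, j = seq[r], seq[(r + 1) % len(seq)]
            c[(i, j)] = c.get((i, j), 0) + 1
    return sum(v * v for v in c.values())

# A balanced 4-team single round robin: every c_ij (i != j) equals 1.
srr4 = [[(1, 2), (3, 4)], [(1, 3), (2, 4)], [(1, 4), (2, 3)]]
value = carry_over_value(srr4, range(1, 5))
```

Here every ordered pair of distinct teams receives exactly one carry-over effect, giving the minimal value 4 × 3 = 12.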
We investigate to what extent soccer schedules in Europe manage to balance carry-over effects. We compute the
carry-over effects value for the first n − 1 rounds of the regular stage of each competition. For most competitions, these
n − 1 rounds form a single-round robin tournament. Figure 1
represents the carry-over effects value of each competition
on a scale where 0% (100%) represents the best (worst)
known result for a single-round robin tournament with n
teams. The carry-over effects value is given between brackets. The competitions with the best balanced carry-over effects are those whose schedules do not present much of the
structure discussed in the previous sections, namely England, Italy, Ireland, Scotland, and The Netherlands. It is also
remarkable that swapping a couple of rounds of a canonical
schedule, as was done in Austria and Switzerland, can drastically reduce the unbalancedness of the carry-over effects.
Unbalanced carry-over effects are regularly used in the
media to explain the outcome of a competition. This happened e.g. for the league title of Brann Bergen in Norway
(Flatberg 2009), and the relegation of Beveren in Belgium
(Geril 2007), in seasons when both competitions were using
the canonical schedule. In fact, in these competitions, unbalanced carry-over effects played an important role in the decision to quit using the canonical schedule. Recently, however, Goossens and Spieksma (2011) measured the influence
of carry-over effects using a dataset of over 10,000 matches
from Belgium’s first division. They find that the influence
of carry-over effects on the result and the goal difference
of a match is negligible, and conclude that a schedule with
unbalanced carry-over effects does not cause a significant
(dis)advantage for any team.
8 Conclusion
In this paper, we presented an overview of the competition
formats and schedules used in 25 European soccer competitions for the season 2008–2009. All competitions use a
round robin tournament in the regular stage; in five competitions this stage is followed by a play-off stage that decides the league title, qualification for European tournaments, and/or relegation. The number of teams in the competition, and the number of rounds varies considerably. All
competitions are open, guaranteeing that at least one second-division team promotes to the first division; in eight competitions the number of teams to be relegated is not fixed.
Perhaps surprisingly, 14 years after the Griggs and Rosa
(1996) investigation, it turns out that the canonical schedule
is still popular, as it is used in more than half of the competitions. The canonical schedule, however, results in the most
unbalanced carry-over effects, and this has been the reason
for at least two competitions to abandon it, although shuffling the rounds of the canonical schedule can already be
quite effective to reduce the carry-over effects value. Minimizing the number of breaks is not the most important objective when creating a schedule. Indeed, over half of the
competitions opts for a mirrored schedule, which uses 50%
more breaks than needed. Further, the vast majority of the
competitions prefers to have at least five rounds between two
matches between the same teams. In general, however, the
number of breaks is limited, and teams rarely have two consecutive breaks. The Premier League is the only competition
where a ranking-balanced schedule is used, although the total number of home games played after every round for each
team does not differ by more than two in most other competitions.
In conclusion, if we look at the properties present in
the 25 schedules in our overview, we can say that there
is a considerable diversity. Moreover, the popularity of the
canonical schedule shows that there is still potential when it
comes to optimizing soccer schedules. Indeed, the number
of canonical schedules is quite small compared to the total
number of feasible schedules. There is, however, a trend to
abandon the canonical schedule in competitions where more
advanced scheduling techniques are introduced. Therefore,
we would not be surprised if only a small minority of the
competitions will still be using the canonical schedule in another 14 years.
References

Anderson, I. (1999). Balancing carry-over effects in tournaments. In
F. Holroyd, K. Quinn, C. Rowley, & B. Webb (Eds.), Research
notes in mathematics: Vol. 403. Combinatorial designs and their
applications (pp. 1–16). London: Chapman & Hall/CRC.
Bartsch, T., Drexl, A., & Kroger, S. (2006). Scheduling the professional soccer leagues of Austria and Germany. Computers & Operations Research, 33(7), 1907–1937.
Buraimo, B., Forrest, D., & Simmons, R. (2009). Insights for clubs
from modeling match attendance in football. The Journal of the
Operational Research Society, 60(2), 147–155.
De Werra, D. (1980). Geography, games and graphs. Discrete Applied
Mathematics, 2(4), 327–337.
De Werra, D. (1981). Scheduling in sports. In P. Hansen (Ed.), Annals
of discrete mathematics: Vol. 11. Studies on graphs and discrete
programming (pp. 381–395). Amsterdam: North-Holland.
Della Croce, F., & Oliveri, D. (2006). Scheduling the Italian Football
League: an ILP-based approach. Computers & Operations Research, 33(7), 1963–1974.
Drexl, A., & Knust, S. (2007). Sports league scheduling: graph- and
resource-based models. Omega, 35, 465–471.
Flatberg, T. (2009). Scheduling the topmost football leagues of Norway. In EURO XXIII: book of abstracts of the 23rd European Conference on Operational Research, Bonn, Germany (p. 240).
Forrest, D., & Simmons, R. (2006). New issues in attendance demand:
the case of the English football league. Journal of Sports Economics, 7(3), 247–266.
Geril, J. (2007). Ons budget voor transfers? Nul komma nul euro. Het
Nieuwsblad, February 2nd (VUM) [Dutch].
Goossens, D., & Spieksma, F. (2009). Scheduling the Belgian soccer
league. Interfaces, 39(2), 109–118.
Goossens, D., & Spieksma, F. (2011). The carry-over effect does
not influence football results. Journal of Sports Economics.
Griggs, T., & Rosa, A. (1996). A tour of European soccer schedules,
or testing the popularity of GK_{2n}. Bulletin of the ICA, 18, 65–68.
Kendall, G. (2008). Scheduling English football fixtures over holiday
periods. The Journal of the Operational Research Society, 59(6),
Kendall, G., Knust, S., Ribeiro, C., & Urrutia, S. (2010). Scheduling in
sports: an annotated bibliography. Computers & Operations Research, 37, 1–19.
Kendall, G., McCollum, B., Cruz, F., & McMullan, P. (2010). Scheduling English football fixtures: consideration of two conflicting objectives. In PATAT’ 10: proceedings of the 8th international conference on the Practice and Theory of Automated Timetabling
Belfast, UK (pp. 1–5).
Kirkman, T. (1847). On a problem in combinations. Cambridge and
Dublin Mathematical Journal, 2, 191–204.
Knust, S., & v. Thaden, M. (2006). Balanced home-away assignments.
Discrete Optimization, 3, 354–365.
McCreary, M. (2008). All matches off as referees strike. Belfast Telegraph, August 8th (Independent News and Media).
Mendelsohn, E., & Rosa, A. (1985). One-factorizations of the complete
graph—a survey. Journal of Graph Theory, 9, 43–65.
Nurmi, K., Goossens, D., Bartsch, T., Bonomo, F., Briskorn, D., Duran,
G., Kyngås, J., Ribeiro, C., Spieksma, F., & Urrutia, S. (2010).
A framework for a highly constrained sports scheduling problem.
In IMECS’10: proceedings of the international MultiConference
of Engineers and Computer Scientists (Vol. III, pp. 1991–1997),
Hong Kong, March 17–19.
Rasmussen, R. (2008). Scheduling a triple round robin tournament for
the best Danish soccer league. European Journal of Operational
Research, 185, 795–810.
Rasmussen, R., & Trick, M. (2008). Round robin scheduling—a survey. European Journal of Operational Research, 188, 617–636.
Russell, K. (1980). Balancing carry-over effects in round robin tournaments. Biometrika, 67(1), 127–131.
Scarf, P. A., & Shi, X. (2008). The importance of a match in a tournament. Computers & Operations Research, 35(7), 2406–2418.
Schreuder, J. (1992). Combinatorial aspects of construction of competition Dutch Professional Football Leagues. Discrete Applied
Mathematics, 35(3), 301–312.
Urrutia, S., & Ribeiro, C. (2006). Maximizing breaks and bounding
solutions to the mirrored travelling tournament problem. Discrete
Applied Mathematics, 154, 1932–1938.
Design elements - Logic gate diagram
The vector stencils library "Logic gate diagram" contains 17 element symbols for drawing the logic gate diagrams.
"To build a functionally complete logic system, relays, valves (vacuum tubes), or transistors can be used. The simplest family of logic gates using bipolar transistors is called resistor-transistor
logic (RTL). Unlike simple diode logic gates (which do not have a gain element), RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early
integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes resulting in diode-transistor logic (DTL). Transistor-transistor logic (TTL) then
supplanted DTL. As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors (MOSFETs); see PMOS and NMOS. To reduce power consumption still
further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary (both n-channel and p-channel) MOSFET devices to achieve a high speed with low power
dissipation." [Logic gate. Wikipedia]
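The quoted passage's point about a "functionally complete logic system" can be illustrated in a few lines of code (the language choice is ours, not the page's): NOT, AND, and OR can all be cascaded from NAND alone, which is why a single gate family suffices to build arbitrary logic.

```python
# NAND is functionally complete: every basic gate can be built from it.
def NAND(a, b):
    return not (a and b)

def NOT(a):
    return NAND(a, a)          # NAND with both inputs tied together

def AND(a, b):
    return NOT(NAND(a, b))     # invert NAND's output

def OR(a, b):
    return NAND(NOT(a), NOT(b))  # De Morgan: a OR b = NOT(NOT a AND NOT b)
```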
The symbols example "Design elements - Logic gate diagram" was drawn using the ConceptDraw PRO diagramming and vector drawing software extended with the Electrical Engineering solution from the
Engineering area of ConceptDraw Solution Park.
Kepler's Third Law
Kepler's laws describe the orbits of planets around the Sun. They were a bridge between the Aristotelian view of the Solar System, which described the orbits incorrectly and did not explain them, and the Newtonian view, which described them (almost) correctly and explained them in scientific terms. Kepler's laws described the orbits (almost) correctly but offered no explanation of their basis; they were based purely on astronomical observations carried out over many years.
Consider an artificial satellite in orbit around the Earth. Ideally it does not require engines to keep it in orbit – the gravitational pull from the Earth supplies the centripetal force.
For a circular orbit, centripetal force = gravitational force.
There are equations for each of these forces.
The equation for the centripetal force is F = mv^2/r.
The equation for the gravitational force is F = GMm/r^2.
We can equate these to obtain mv^2/r = GMm/r^2, so v^2 = GM/r. (1)
The satellite orbits the Earth, travelling a distance 2πr in one period T, so v = 2πr/T. (2)
Equations (1) and (2) give (2πr/T)^2 = GM/r, hence T^2 = (4π^2/GM)r^3 – Kepler's third law: the square of the orbital period is proportional to the cube of the orbital radius.
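Assuming standard values for G and the Earth's mass (not given on this page), the law T^2 = 4π^2 r^3/(GM) can be checked numerically; a geostationary orbit radius should give a period of about one sidereal day:

```python
import math

G = 6.674e-11   # gravitational constant, N m^2 kg^-2 (standard value, assumed)
M = 5.972e24    # mass of the Earth, kg (standard value, assumed)

def orbital_period(r):
    """Period (s) of a circular orbit of radius r (m): T = 2*pi*sqrt(r^3 / (G*M))."""
    return 2 * math.pi * math.sqrt(r ** 3 / (G * M))

# Geostationary orbit radius ~4.216e7 m -> roughly one sidereal day (~86164 s)
T_geo = orbital_period(4.216e7)
```

Note also that doubling the radius multiplies the period by 2^(3/2), exactly the T^2 ∝ r^3 scaling of the third law.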
Solved Exercise, Chem-11, Ch-03 - Exceptional Educations
Q.01: Select the most suitable answers from the given ones in each question:
(i) Pressure remaining constant at which temperature the volume of a gas will become twice of what it is at 0^oC:
(a) 546^oC
(b) 200^oC
(c) 546 K
(d) 273 K
Ans: (c) 546 K
EXPLANATION: When a gas is present at 0^oC or 273 K, and its temperature is increased by 1^oC, its volume will increase by 1/273 of its original volume V[o] at 0^oC i.e., its new volume will be V[o]
+ V[o]/273. In this way, if temperature is increased from 0^oC to 273^oC, the volume of the gas will become double.
(ii) Number of molecules in one dm^3 of water is close to:
(a) 6.02/22.4× 10^23
(b) 12.04/22.4× 10^23
(c) 18/22.4× 10^23
(d) 55.6 × 6.02 × 10^23
Ans: (d) 55.6 × 6.02 × 10^23
EXPLANATION: 1 dm^3 of water is equal to 1 kg of water, because the density of water is 1 g/cm^3. One kg of water contains 55.6 moles (n = m/M = 1000/18 = 55.6). We know 1 mole of a compound contains 6.02 ✕ 10^23 molecules, so 55.6 moles of water contain 55.6 ✕ 6.02 ✕ 10^23 molecules of water.
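The arithmetic can be checked as a short sketch (constant names are our own):

```python
AVOGADRO = 6.02e23
MOLAR_MASS_WATER = 18.0   # g/mol

mass = 1000.0                        # g: 1 dm^3 of water at density 1 g/cm^3
moles = mass / MOLAR_MASS_WATER      # n = m/M ~ 55.6 mol
molecules = moles * AVOGADRO         # ~ 55.6 x 6.02e23 ~ 3.34e25
```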
(iii) Which of the followings, will have the same number of molecules at STP?
(a) 280 cm^3 of CO[2] and 280 cm^3 of N[2]O
(b) 11.2 dm^3 of O[2] and 32 g of O[2]
(c) 44 g of CO[2] and 11.2 dm^3 of CO
(d) 28 g of N[2] and 5.6 dm^3 of oxygen
Ans: (a) 280 cm^3 of CO[2] and 280 cm^3 of N[2]O
EXPLANATION: 280 cm^3 of CO[2] and 280 cm^3 of N[2]O contain equal number of molecules because at STP, same volumes of different gases contain same number of moles as well as molecules (Avogadro’s
Law). Since, both gases have same volumes, so their number of molecules are also the same.
(iv) If absolute temperature of a gas is doubled and the pressure is reduced to one half, the volume of the gas will be:
(a) Remain unchanged
(b) Increase four times
(c) Reduced to 1/4
(d) Be doubled
Ans: (b) Increase four times
EXPLANATION: If absolute temperature of gas is doubled, its volume will become double because volume is directly proportional to absolute temperature (Charles’ Law). When pressure on the same gas is
reduced to one half, its volume will double again (Boyle’s Law). Thus, the net result will be four times increase in volume of the gas.
(v) How should the conditions be changed to prevent the volume of a given gas form expanding when its mass is increased?
(a) Temperature is lowered & pressure is increased.
(b) Temperature is increased & pressure is lowered.
(c) Temperature and pressure both are lowered.
(d) Temperature and pressure both are increased.
Ans: (a) Temperature is lowered & pressure is increased.
EXPLANATION: When the mass of a gas is increased, its volume tends to increase. If we want to prevent it from expanding, we have to decrease its temperature and increase the pressure on it: lowering the temperature decreases the motion of the gas molecules so they come closer together, and increasing the pressure likewise pushes the molecules closer, keeping the gas from expanding.
(vi) The molar volume of CO[2] is maximum at:
(a) STP
(b) 127^oC and 1 atm
(c) 0^oC and 2 atm
(d) 273^oC and 2 atm
Ans: (b) 127^oC and 1 atm
EXPLANATION: The volume of a gas is directly proportional to temperature and inversely proportional to pressure, so it will be maximum at high temperature and low pressure. When we calculate the
volume of one mole of CO[2] under different conditions of temperature and pressure as given in the options above, by using general gas equation (PV=nRT), we come to know that it will be maximum at
127^oC temperature and 1 atm pressure.
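The comparison in this explanation can be reproduced with the general gas equation, using R ≈ 0.0821 atm dm^3 mol^-1 K^-1 (a sketch; the labels are ours):

```python
R = 0.0821  # general gas constant, atm dm^3 mol^-1 K^-1

def molar_volume(t_celsius, p_atm):
    """Volume of one mole of an ideal gas, V = RT/P, in dm^3."""
    return R * (t_celsius + 273) / p_atm

volumes = {
    "STP (0 C, 1 atm)": molar_volume(0, 1),    # ~22.4 dm^3
    "127 C, 1 atm": molar_volume(127, 1),      # ~32.8 dm^3 (largest)
    "0 C, 2 atm": molar_volume(0, 2),          # ~11.2 dm^3
    "273 C, 2 atm": molar_volume(273, 2),      # ~22.4 dm^3
}
```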
(vii) The order of the rate of diffusion of gasses NH[3], SO[2], Cl[2], and CO[2] is:
(a) NH[3]>SO[2]>Cl[2]>CO[2]
(b) NH[3]>CO[2]>SO[2]>Cl[2]
(c) Cl[2]>SO[2]>CO[2]>NH[3]
(d) NH[3]>CO[2]>Cl[2]>SO[2]
Ans: (b) NH[3]>CO[2]>SO[2]>Cl[2]
EXPLANATION: The rate of diffusion of a gas is inversely proportional to the square root of its density or molar mass (Graham's Law of Diffusion). Keeping in mind the molar masses of these gases, we conclude that their rates of diffusion will be in the order NH[3]>CO[2]>SO[2]>Cl[2].
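A quick check of this ordering under Graham's law (molar masses in g/mol):

```python
import math

molar_mass = {"NH3": 17.0, "CO2": 44.0, "SO2": 64.0, "Cl2": 71.0}

# Graham's law: rate of diffusion is proportional to 1/sqrt(M), so sorting
# by sqrt(M) ascending orders the gases from fastest to slowest diffusion.
order = sorted(molar_mass, key=lambda gas: math.sqrt(molar_mass[gas]))
```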
(viii) Equal masses of CH[4] and oxygen are mixed in an empty container at 25^oC. The fraction of total pressure exerted by oxygen is:
(a) 1/3
(b) 8/9
(c) 1/9
(d) 16/17
Ans: (a) 1/3
EXPLANATION: The pressure exerted by a gas in a mixture of gases is directly proportional to the number of moles or molecules of that gas. The greater the mole fraction of a gas in a mixture, the
greater will be its partial pressure and vice versa. The molar mass of oxygen (32 g mole^-1) is twice the molar mass of methane (16 g mole^-1). When equal masses of these two gases are taken, the
mole fraction of oxygen will be half that of methane. So, 2/3 of the total pressure will be exerted by methane, while 1/3 will be exerted by oxygen.
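The mole fractions can be verified directly (sketch; 8 g is chosen for illustration, but any equal masses give the same fractions):

```python
mass = 8.0                      # g of each gas
n_CH4 = mass / 16.0             # moles of methane (M = 16 g/mol)
n_O2 = mass / 32.0              # moles of oxygen  (M = 32 g/mol)

x_O2 = n_O2 / (n_CH4 + n_O2)    # mole fraction of O2  = 1/3
x_CH4 = n_CH4 / (n_CH4 + n_O2)  # mole fraction of CH4 = 2/3
```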
(ix) Gases deviate from ideal behavior at high pressure. Which of the followings is correct for non-ideality?
(a) At high pressure, the gas molecules move in one direction only.
(b) At high pressure, the collisions between the gas molecules are increased manifold.
(c) At high pressure, the volume of the gas becomes insignificant.
(d) At high pressure, the intermolecular attractions become significant.
Ans: (d) At high pressure, the intermolecular attractions become significant.
EXPLANATION: At high pressure the gas molecules come close to each other; their actual volume no longer remains negligible compared to the total volume of the vessel, and they also start attracting each other. So, the free volume available to the gas molecules, as well as the pressure exerted by them, do not match the values predicted by the ideal gas equation. The gas is now said to deviate from ideal behaviour.
(x) The deviation of a gas from ideal behavior is maximum at:
(a) –10^oC and 5.0 atm
(b) –10^oC and 2.0 atm
(c) 100^oC and 2.0 atm
(d) 0^oC and 2.0 atm
Ans: (a) –10^oC and 5.0 atm
EXPLANATION: Gases deviate from ideal behaviour at low temperature and high pressure because under these conditions, the gas molecules come close to each other. Their mutual distances are reduced so
the actual volume of gas molecules does not remain negligible as compared to total volume of the vessel. Forces of attraction between gas molecules also start operating at lower distances, so real
pressure becomes less than that predicted by ideal gas equation. Keeping in mind all these facts and putting the above values in van der Waal’s equation, the gas seem to deviate most from ideality at
–10^oC temperature and 5.0 atm pressure.
(xi) A real gas obeying van der Waals equation will resemble ideal gas if:
(a) Both “a” and “b” are large
(b) Both “a” and “b” are small
(c) “a” is small and “b” is large
(d) “a” is large and “b” is small
Ans: (b) Both “a” and “b” are small
EXPLANATION: ‘a’ is called the attraction co-efficient. The lower the value of the attraction co-efficient of a gas, the smaller the attraction between its molecules, and the less the deviation from ideal behaviour, and vice versa. ‘b’ is called the excluded volume, which is four times the actual volume of the gas molecules. The lower the value of ‘b’, the greater the volume available for the free movement of the gas molecules, and as a result the less the deviation from ideality. So, for a gas to resemble an ideal gas, its ‘a’ and ‘b’ values must be small.
Q.02: Fill in the blanks.
(i) The product PV has the S.I. unit of ________.
(Nm, i.e., joule; pressure in Nm^-2 times volume in m^3)
(ii) Eight gram of each O[2] and H[2] at 27^oC will have the K.E. in the ratio of _________.
(1 : 16, since 8 g of O[2] is 0.25 mol while 8 g of H[2] is 4 mol, and the total K.E. at the same temperature is proportional to the number of moles)
(iii) Smell of cooking gas during leakage from a gas cylinder is due to the property of ________ of gases.
(diffusion)
(iv) Equal ________ of the ideal gases at the same temperature and pressure contain ________ number of molecules.
(volumes, equal)
(v) The temperature above which a substance exists only as a gas is called _________.
(critical temperature)
Q.03: Indicate true or false as the case may be.
(i) Kinetic energy of molecules of gas is zero at 0^oC.
Correct: Kinetic energy of molecules of gas is zero at 0 K.
(ii) A gas in a closed cylinder will exert much higher pressure at the bottom due to gravity than at the top.
Correct: Gas molecules exert nearly equal pressure in all directions in a closed container because the effect of gravity on the motion of gas molecules is negligible.
(iii) Real gases show ideal gas behavior at low pressure and high temperature. (True)
(iv) Liquefaction of gases involves decrease in intermolecular spaces. (True)
(v) An ideal gas on expansion will show Joule-Thomson effect.
Correct: An ideal gas shows no Joule-Thomson effect, since the effect arises from intermolecular attractions, which an ideal gas lacks. A highly compressed real gas, on sudden expansion through a small hole or jet, produces cooling called the Joule-Thomson effect.
Q.4(a) What is Boyle’s law of gases? Give its experimental verification.
Ans: Boyle’s law is stated as follows: “The volume of a given mass of a gas at constant temperature is inversely proportional to the pressure applied to the gas.”
So, V ∝ 1/P (when the temperature and number of moles are constant)
or V = k/P
PV = k (when T & n are constant) ___________ (1)
‘k’ is proportionality constant. The value of k is different for different amounts of the same gas.
According to the equation (1), Boyle’s law can also be defined as: “The product of pressure and volume of a fixed amount of a gas at constant temperature is a constant quantity.”
So, P[1]V[1] = k and P[2]V[2] = k
Hence, P[1]V[1] = P[2]V[2]
Experimental Verification of Boyle’s Law:
Let us take a gas in a cylinder having a moveable piston. The cylinder is also attached to a manometer to read the pressure of the gas directly. Let the initial volume of the gas be 1 dm^3 and its pressure 2 atmospheres when the piston has one weight on it. When the piston is loaded with two equal weights, the pressure becomes four atmospheres. Similarly, when the piston is loaded with a mass three times greater, the pressure becomes six atmospheres. The initial volume of the gas at two atmospheres is 1 dm^3. It is reduced to 1/2 dm^3 and then to 1/3 dm^3 with the increase of weights, respectively.
P[1]V[1]= 2 atm x 1 dm^3= 2 dm^3 atm = k
P[2]V [2]= 4 atm x 1/2 dm^3= 2 dm^3 atm = k
P[3]V[3]= 6 atm x 1/3 dm^3 = 2 dm^3 atm = k
Hence, Boyle’s law is verified.
The value of k will remain the same for the same quantity of a gas at the same temperature.
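The three readings above can be checked with a short script (a sketch; variable names are ours):

```python
# Verify Boyle's law data from the experiment above: P x V stays constant.
pressures = [2.0, 4.0, 6.0]          # atm
volumes = [1.0, 1.0 / 2, 1.0 / 3]    # dm^3
products = [p * v for p, v in zip(pressures, volumes)]
# every product equals k = 2 dm^3 atm (within floating-point tolerance)
assert all(abs(pv - 2.0) < 1e-9 for pv in products)
```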
(b) What are isotherms? What happens to the positions of isotherms when they are plotted at higher temperature for a particular gas?
Isotherm: ‘Iso’ means ‘same’ and ‘therm’ means ‘heat’. Isotherm is the curve obtained when the pressure of a gas is plotted as abscissa on x-axis and volume as ordinate on y-axis, keeping the
temperature constant. Isotherms give graphical explanation to Boyle’s law.
When isotherms are plotted at higher temperatures, they also become elevated in positions, i.e., they move away from both pressure and volume axes. This shows that at higher temperatures, the gases
acquire greater volumes than at lower temperatures.
(c) Why do we get a straight line when the pressure exerted on a gas is plotted against the inverse of volume? This straight line changes its position in the graph by varying the temperature. Justify it.
The graph between P and 1/V is a straight line which shows that both are directly proportional to each other. This is because 1/V is the inverse of V. When pressure on a gas is increased, 1/V also
increases. It means the volume of the gas decreases. This is in accordance with Boyle’s law. When the same graph is plotted at higher temperature, say T[2], the straight line moves towards pressure
axis (y-axis). This shows that at higher temperature, 1/V decreases, so the volume of the gas increases.
(d) How will you explain that the value of constant k in the equation PV = k depends upon:
(i) The temperature of gas
(ii) The quantity of gas
Ans: In Boyle’s law, the value of the proportionality constant ‘k’ depends upon the temperature and quantity of the gas. This is because ‘k’ is actually the product of the pressure and volume of the gas. At constant temperature and quantity of gas, an increase in pressure will decrease the volume, keeping PV constant. But when T and n are also increased along with the pressure, the volume will not decrease as before. It will have a greater value depending upon the temperature and the quantity of gas added. So, the value of the constant ‘k’ also changes accordingly.
Q.05: What is Charles’s law? Which scale of temperature is used to verify that V/T = k (pressure and number of moles are constant)?
Ans: Charles’ law is a quantitative relationship between temperature and volume of a gas and was given by French scientist J. Charles in 1787. According to this law: “The volume of the given mass of
a gas is directly
proportional to the absolute temperature when the pressure is kept constant.” i.e.,
V ∝ T (P & n are constant)
V = kT
Absolute or Kelvin scale of temperature is used to verify the Charles’ law equation, i.e., V/T = k.
(b) A sample of carbon monoxide gas occupies 150.0 ml at 25.0^oC. It is then cooled at constant pressure until it occupies 100.0 ml. What is the new temperature?
V[1] = 150 ml
V[2] = 100 ml
T[1] = 25^oC + 273 = 298 K
T[2] = ?
Formula Applied:
V[1]/T[1] = V[2]/T[2]
T[2] = V[2]T[1]/V[1]
T[2] = 100 ml ✕ 298 K / 150 ml = 198.67 K
Ans: T[2] = 198.67 K or –74.33^oC.
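The same calculation can be sketched in code (variable names are ours; the temperature must be in kelvin for Charles's law):

```python
# Charles's law calculation for Q.05(b): V1/T1 = V2/T2.
V1, V2 = 150.0, 100.0        # ml
T1 = 25.0 + 273.0            # 298 K
T2 = V2 * T1 / V1            # cooling at constant pressure
print(round(T2, 2), round(T2 - 273.0, 2))  # 198.67 K, -74.33 degrees C
```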
(c) Do you think that the volume of any quantity of a gas becomes zero at –273.16^oC? Is it not against the law of conservation of mass? How do you deduce the idea of absolute zero from this?
Ans: According to the definition of absolute zero, at –273.16^oC, the volume of every gas becomes equal to zero. But, practically, it is not possible because zero volume means no volume. It means gas
will no longer exist at that temperature. This is against the law of conservation of mass which says that matter can neither be created nor destroyed, though it may be changed from one form into
another. Actually, no gas can achieve this temperature, because all the gases become liquid and solids before attaining this temperature.
Q.06(a): What is the Kelvin scale of temperature? Plot a graph for one mole of a real gas to prove that the gas becomes liquid earlier than –273.16^oC.
Absolute or Kelvin Scale: This is a scale of temperature at which the melting point of ice at 1 atmospheric pressure is 273K, and the boiling point of water is 373K or more precisely 373.16K.
Temperature on Kelvin scale = Temperature °C + 273.16
Graph showing that one mole of a real gas changes into a liquid and then solid before achieving absolute zero temperature.
(b) Throw some light on the factor 1/273 in Charles’ law.
According to the quantitative definition of Charles’s law, if a gas is kept at constant pressure, then for every one degree rise or fall in temperature, the volume of the gas will increase or decrease by 1/273 of its original volume at 0^oC. In this way, the volume of a gas at any temperature ‘t’ can be calculated with the help of this factor, i.e., V[t] = V[0](1 + t/273).
According to this equation, if the temperature of a gas is increased from 0^oC to 273^oC, the volume of the gas will become double. And, if the temperature is decreased from 0^oC to –273^oC, the
volume of the gas will become zero.
Q.07(a): What is the general gas equation? Derive it in various forms.
General Gas Equation: All gas laws can be combined to form general gas equation as:
According to Boyle’s law, V ∝ 1/P (When ‘n’ and ‘T’ are held constant)
According to Charles’s law, V ∝ T (When ‘n’ and ‘P’ are held constant)
According to Avogadro’s law, V ∝ n (When ‘P’ and ‘T’ are held constant)
When none of the variables are kept constant, the above three equations can be combined as:
V ∝ nT/P
V = constant nT/P
V = R nT/P
PV = nRT
This equation is called general gas equation or ideal gas equation. ‘R’ is general gas constant or universal gas constant. This equation shows that if we have any quantity of an ideal gas, the
product of pressure and volume is equal to the product of number of moles, general gas constant and absolute temperature.
(b) Can we determine the molecular mass of an unknown gas if we know the pressure, temperature and volume along with the mass of that gas?
Ans: Yes, we can determine the molecular mass of an unknown gas if we know the pressure, temperature and volume along with the mass of that gas, by applying the following modified form of general gas
PV = nRT
Or PV = m/M ✕ RT
Or M = mRT/PV
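As a quick sketch of the rearranged equation M = mRT/PV (the sample numbers are ours: 8 g of a gas found to occupy 5.6 dm^3 at STP):

```python
R = 0.0821  # atm dm^3 mol^-1 K^-1

def molar_mass(m, P, V, T):
    """M = mRT/(PV), rearranged from PV = (m/M)RT."""
    return m * R * T / (P * V)

# Illustrative check: the result ~32 g/mol identifies the gas as O2.
print(round(molar_mass(8.0, 1.0, 5.6, 273.0), 1))
```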
(c) How do you justify from general gas equation that increase in temperature or decrease of pressure decreases the density of the gas?
Ans: According to general gas equation:
PV = nRT
Or PV = m/M ✕ RT
Or PM = m/V ✕ RT
Or PM = dRT (∵ m/V=d)
Or d=PM/RT
This relation shows that the density of a gas is directly proportional to pressure and inversely proportional to temperature. So, if temperature is increased, the density of the gas will decrease and vice versa. Similarly, if pressure on the gas is decreased, the density will also decrease and vice versa.
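A short numerical sketch of d = PM/RT (the example gas, O[2] with M = 32 g/mol, is our choice):

```python
R = 0.0821  # atm dm^3 mol^-1 K^-1

def density(P, M, T):
    return P * M / (R * T)  # d = PM/RT, in g/dm^3

d_stp = density(1.0, 32.0, 273.0)   # O2 at STP
d_hot = density(1.0, 32.0, 373.0)   # same gas, higher T -> lower density
d_low = density(0.5, 32.0, 273.0)   # same gas, lower P -> lower density
assert d_hot < d_stp and d_low < d_stp
```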
(d) Why do we feel comfortable in expressing the densities of gases in the units of g dm^-3 rather than gcm^-3, a unit which is used to express the densities of liquids and solids?
Ans: The densities of gases are so small that expressing them in normal units of g cm^-3 make calculations difficult. So, it is more convenient to express them in g dm^-3. While, the densities of
liquids and solids are high enough to be expressed in normal units of g cm^-3.
Q.08(a): Derive the units for the gas constant R in the general gas equation:
(a) When the pressure in atmosphere and volume in dm^3.
Ans: According to Avogadro’s law, the volume of one mole of a gas at STP is 22.414 dm^3. Putting these values of P, T, V & n in general gas equation, the value of R at STP can be calculated:
PV = nRT or R = PV/nT = (1 atm ✕ 22.414 dm^3)/(1 mol ✕ 273.16 K) = 0.0821 atm dm^3 mol^-1 K^-1
(b) When pressure is in Nm^-2 and volume in m^3.
Ans: The SI unit of pressure is Nm^-2 and that of volume is m^3. Applying SI units of P, T & V in the formula R = PV/nT, the value of R can be calculated: R = (101325 Nm^-2 ✕ 0.022414 m^3)/(1 mol ✕ 273.16 K) = 8.314 Nm mol^-1 K^-1 = 8.314 J mol^-1 K^-1.
(c) When energy is expressed in ergs.
Ans: The value of R in Joules mol^-1K^-1:
R = 8.314J mol^-1K^-1
We know, 1 J = 1 ✕ 10^7 ergs
Or 1 J = 10,000,000 ergs
So, the value of R when energy is taken in ergs is:
R = 8.314 ✕ 10^7 ergs mol^-1K^-1
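The three values of R derived above can be reproduced from the STP data (a sketch; variable names are ours):

```python
# R from STP data in each of the three unit systems discussed above.
R_atm = (1.0 * 22.414) / (1.0 * 273.16)          # atm dm^3 mol^-1 K^-1
R_si = (101325.0 * 0.022414) / (1.0 * 273.16)    # J mol^-1 K^-1
R_erg = R_si * 1.0e7                             # 1 J = 10^7 erg
print(round(R_atm, 4), round(R_si, 3), R_erg)
```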
Q.09(a): What is Avogadro’s law of gases?
Avogadro’s Law: Avogadro’s law states that: “Equal volumes of all the ideal gases at same temperature and pressure contain equal number of molecules.” For example,
22.414 dm^3 of H[2] at STP = 1 mole of H[2] contain = 6.02 × 10^23 molecules of H[2]
22.414 dm^3 of CH[4] at STP = 1 mole of CH[4] contain = 6.02 × 10^23 molecules of CH[4]
(b) Do you think 1 mole of H[2] and 1 mole of NH[3] at 0^oC and 1 atm pressure have Avogadro’s number of molecules?
No. 0^oC is quite a low temperature. Under 1 atm pressure, 1 mole of H[2], consisting of very small non-polar molecules, may exist as independent molecules and so contains Avogadro’s number (6.02 × 10^23) of molecules; but 1 mole of NH[3], consisting of highly polar molecules with strong hydrogen bonding, doesn’t exist as independent molecules but in the form of irregular groups. So, 1 mole of NH[3] at 0^oC and 1 atm pressure doesn’t have Avogadro’s number of independent molecules.
(c) Justify that 1 cm^3 of H[2] and 1 cm^3 of CH[4] at STP will have same number of molecules, when one molecule of CH[4] is 8 times heavier than that of hydrogen.
According to Avogadro’s law, equal volumes of all gases, under same conditions of temperature and pressure, contain equal number of molecules. So, 1 cm^3 each of H[2] and CH[4] at STP also contains
equal number of molecules, no matter what their masses are. This is due to the reason that the volumes of gases are affected by the number not by the nature of molecules because they are so far apart
(about 300 times their diameters) that their masses or sizes don’t disturb their positions.
Q.10(a): Dalton’s law of partial pressures is obeyed only by those gases which don’t have attractive forces among their molecules. Explain it.
Dalton’s law of partial pressures is obeyed only by those gases whose molecules don’t attract each other. This is because if there were attractions among different gas molecules, the total pressure
will become less than the sum of the individual partial pressures of the component gases, which is against Dalton’s law.
(b) Derive an equation to find out the partial pressure of a gas, knowing the individual moles of the component gases and the total pressure of the mixture.
Ans: For a gas ‘A’ in a mixture occupying volume V at temperature T, p[A]V = n[A]RT, and for the whole mixture, P[t]V = n[t]RT. Dividing the two equations gives p[A]/P[t] = n[A]/n[t] = x[A] (the mole fraction of A). Hence, p[A] = x[A] ✕ P[t].
(c) Explain that the process of respiration obeys Dalton’s law of partial pressures.
The process of respiration depends upon the difference in partial pressures of oxygen inside the lungs and in the external air. When animals inhale, O[2] moves from air into their lungs and diffuses into the blood, because the partial pressure of O[2] in air is 159 torr, while in the lungs’ blood it is 116 torr. During exhalation, CO[2] moves in the opposite direction, i.e., from the blood in the lungs into the external air, as its partial pressure is higher in the blood than in the outer air.
(d) How do you differentiate between diffusion and effusion? Explain Graham’s law of diffusion.
Diffusion:
1. The spontaneous mixing of different gas molecules by random motion and collision to form a homogeneous mixture is called diffusion.
2. Collision of molecules is necessary for diffusion.
3. It takes place through an open surface.
4. Example: Spreading of fragrance or perfume in a room.
Effusion:
1. The escape of gas molecules one by one into a region of low pressure through a very small hole is called effusion.
2. Collision of molecules is not necessary for effusion.
3. It takes place through an extremely small hole.
4. Example: Leakage of air through small holes in balloons and tyre tubes.
Graham’s Law of Diffusion: “The rate of diffusion or effusion of a gas is inversely proportional to the square root of its density (or molar mass) at constant temperature and pressure.” That is, r ∝ 1/√d, and for two gases, r[1]/r[2] = √(d[2]/d[1]) = √(M[2]/M[1]).
Q.11(a): What is the critical temperature of a gas? What is its importance for the liquefaction of gases? Discuss Linde’s method of liquefaction of gases.
Ans: Critical Temperature: The highest temperature at which a substance can exist as a liquid is called its critical temperature (T[c]). A gas can be liquefied by applying pressure only when it is at or below its critical temperature. Above its critical temperature, a gas can never be liquefied, however much pressure is applied. This is because, above this temperature, the gas molecules have such high energy that intermolecular forces cannot bind them into a liquid. So, critical temperature is an essential criterion to be considered for the liquefaction of gases. In Linde’s method, air is compressed, cooled, and allowed to expand through a nozzle; the Joule-Thomson cooling so produced is repeated until the air drops below its critical temperature and liquefies.
(b) What is Joule-Thomson effect? Explain its importance in Linde’s method of liquefaction of gases.
Joule-Thomson Effect: “When a highly compressed gas is allowed to expand into a region of low pressure, it gets cooled. This is called Joule-Thomson effect.”
Explanation: In highly compressed state, the gas molecules are very close to each other, so appreciable attractive forces are present among them. When this gas is released into an area of low
pressure, its molecules try to move apart. For this purpose, they require energy to overcome the intermolecular forces. For this purpose, the molecules utilize their own energy and, as a result,
become cool.
Q.12(a) What is the kinetic molecular theory of gases? Give its postulates.
Kinetic Molecular Theory of Gases:
Bernoulli (1738) put forward the kinetic molecular theory of gases to explain the behaviour of gases quantitatively. This theory provides a framework for understanding the macroscopic properties of gases, such as pressure, volume and temperature, in terms of the microscopic behaviour of their constituent particles (atoms or molecules).
Following are the fundamental postulates of this kinetic theory of gases.
1. Every gas consists of a large number of very small particles called molecules. Gases like He, Ne, Ar have monoatomic molecules.
2. The molecules of a gas move haphazardly, colliding among themselves and with the walls of the container and change their directions.
3. The pressure exerted by a gas is due to the collisions of its molecules with the walls of a container. The collisions among the molecules are perfectly elastic.
4. The molecules of a gas are widely separated from one another and there are sufficient empty spaces among them.
5. The molecules of a gas have no forces of attraction for each other.
6. The actual volume of molecules of a gas is negligible as compared to the volume of the gas.
7. The motion imparted to the molecules by gravity is negligible as compared to the effect of the continued collisions between them.
8. The average kinetic energy of the gas molecules varies directly as the absolute temperature of the gas.
(b) How does kinetic molecular theory of gases explain the following gas laws?
(i) Boyle’s Law
(ii) Charles’ Law
(iii) Avogadro’s law
(iv) Graham’s law of diffusion
Ans: Kinetic theory of gases gives birth to kinetic equation of gases, which can be employed to justify the
gas laws. In other words, it proves that gas laws get their explanation from kinetic theory of gases.
(i) Boyle’s Law:
(ii) Charles’ Law:
(iii) Avogadro’s Law:
(iv) Graham’s Law of Diffusion:
Q.13(a) Gases show non-ideal behavior at low temperature and high pressure. Explain this with the help of a graph.
Ans: Gases show non-ideal behaviour at low temperature and high pressure. This can be explained by drawing a graph between pressure (P) on x-axis and compressibility factor PV/nRT on y-axis. The
compressibility factor PV/nRT has a constant value equal to unity under all conditions for an ideal gas. This constant shows that the increase of pressure decreases the volume in such a way that PV/
nRT remains constant at a constant temperature. So, a straight line is obtained parallel to the pressure axis, as shown in the figure. The real gases, however, show marked deviations from this
behaviour. For example:
• The graph for helium gas goes along the expected horizontal dotted line to some extent but rises above this line at very high pressures. It means that at very high pressure the decrease in volume is not according to the general gas equation, and the value of PV/nRT increases above the expected value. So, helium becomes non-ideal.
• In the case of H[2], the deviation starts even at low pressure in comparison to He.
• N[2] shows a decrease in PV/nRT value at the beginning and shows marked deviation even at low pressure than H[2].
• CO[2] has a very strange behaviour as it is evident from the graph.
When we study the behaviour of all these four gases at elevated temperature i.e., 100^oC, the graphs come closer to the expected straight line and the deviations are shifted towards higher pressure.
This means that the increase in temperature makes the gases ideal. (Fig. 2)
This discussion, based on experimental observations, convinces us that:
• Gases are ideal at low pressure and non-ideal at high pressure.
• Gases are ideal at high temperature and non-ideal at low temperature.
(b) Do you think some postulates of KMT are faulty? Point out these postulates.
Faulty Postulates of Kinetic Molecular Theory: The real gases deviate from ideal behaviour due to the two faulty postulates of kinetic molecular theory. These postulates are:
1. ‘There are no forces of attraction among the molecules of a gas.’ This is true at low pressure and high temperature when the molecules are far apart. But at high pressure and low temperature, the
gas molecules are very close to each other and attractive forces operate among them. So, the gases deviate from ideality.
2. ‘The actual volume of the gas molecules is negligible as compared to the volume of the vessel.’ This is also somewhat true at low pressure and high temperature, but at high pressure and low
temperature, when the gas molecules come close to each other, the actual volume of the gas molecules does not remain negligible. So, the gases deviate from ideality.
(c) Hydrogen and helium are ideal at room temperature, but SO[2] and Cl[2] are non-ideal. How will you explain this?
Ans: H[2] & He are small, non-polar gases. They have very weak London dispersion forces as intermolecular forces. So, they behave ideally at room temperature.
Cl[2] is also a non-polar gas, but due to its large size and high polarizability, strong London dispersion forces exist among its molecules. So, it behaves non-ideally at room temperature.
SO[2] is a polar gas, having large size and high polarizability. It has strong intermolecular forces (both dipole-dipole and London dispersion forces), so it also behaves non-ideally at room temperature.
Q.14(a): Derive van der Waal’s equation for real gases.
(b) What is the physical significance of van der Waal’s constants, “a” and “b”? Give their units.
Ans: (a) Consult the textbook article 3.10.2 for complete derivation of van der Waal’s real gas equation.
(b) Physical Significance of van der Waal’s Constants:
Van der Waals’ real gas equation is:
(P + an^2/V^2)(V – nb) = nRT
Here ‘a’ and ‘b’ are called van der Waal’s constants.
‘a’ is called the co-efficient of attraction or attraction per unit volume. It is the measure of the attraction between the gas molecules. A greater value of ‘a’ indicates greater attraction between gas molecules, and therefore greater deviation from ideality.
‘b’ is the excluded or effective volume. It is the volume of one mole of a gas in a highly compressed state, but not in the liquid state. A greater value of ‘b’ means greater space occupied by the gas molecules, so less space is available for their motion, and hence greater deviation from ideality. Units: ‘a’ has units of atm dm^6 mol^-2 and ‘b’ has units of dm^3 mol^-1.
Q.15: Explain the following facts:
(i) The plot PV versus P is a straight line at constant temperature and with a fixed number of moles of an ideal gas.
Ans: When a graph is plotted between P on x-axis and PV on y-axis, a straight line parallel to the x-axis is obtained. This shows that for every value of P, the product PV remains constant. This is
because of the fact that P and V are inversely proportional to each other. When P is increased, V will decrease in such a way that the product PV or ‘k’ remains constant.
(ii) The straight line in (a) is parallel to pressure axis and goes away from the pressure axis at higher temperatures for many gases.
Ans: When a graph is plotted between P on x-axis and PV on y-axis, a straight line parallel to the x-axis is obtained. When temperature is increased, this straight line for many gases goes away from
pressure axis. This is because with the increase in temperature, the volume increases and the value of product PV also increases at same pressure.
(iii) Pressure of NH[3] gas at given conditions (say 20 atm pressure and room temperature) is less as calculated by the van der Waals equation than that calculated by the general gas equation.
At 20 atm pressure and room temperature, strong hydrogen bonding exists between NH[3] molecules, due to which the pressure exerted by the gas molecules on the walls of the container decreases. So, the real pressure of NH[3], calculated by the van der Waals equation (P + an^2/V^2)(V – nb) = nRT, will be less than the ideal pressure of NH[3], calculated by the ideal gas equation, PV = nRT.
(iv) Water vapours do not behave ideally at 273 K.
Ans: Water vapours don’t behave ideally at 273 K because this is the freezing point of water. At such a low temperature, water molecules in the vapour phase come very close to each other and strong hydrogen bonding operates among them. Thus, their real pressure becomes less than the ideal pressure, so they do not follow the ideal gas equation, PV = nRT.
(v) SO[2] is comparatively non-ideal at 273K but behaves ideally at 373K.
Ans: SO[2] behaves non-ideally at 273 K because at such low temperature, the gas molecules come very close to each other and strong intermolecular forces start operating between them. So, the ideal
gas equation PV=nRT is not obeyed.
SO[2] behaves ideally at 373 K because at this temperature the gas molecules move far away from each other. The attractive forces between them become very weak, and the ideal gas equation, PV = nRT, is obeyed.
a) If no change in temperature occurs.
b) If its temperature changes from 20°C to 15°C?
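Both parts of Q.16 can be sketched numerically (variable names are ours; part (a) is Boyle's law, part (b) the combined gas law):

```python
P1, V1, V2 = 500.0, 100.0, 250.0    # torr, cm^3, cm^3
# (a) constant temperature: Boyle's law
P2a = P1 * V1 / V2                   # 200 torr
# (b) temperature drops from 20 C to 15 C: combined gas law
T1, T2 = 20.0 + 273.0, 15.0 + 273.0
P2b = P1 * V1 * T2 / (V2 * T1)       # slightly below 200 torr
print(P2a, round(P2b, 1))
```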
Q.17: a) What are the densities in kg dm^–3 of the following gases at STP (P=101325 Nm^-2, T=273 K, molecular masses are in kg mol^–1)? (i) Methane (ii) Oxygen (iii) Hydrogen
b) Compare the values of densities in proportion to their molar masses.
From the above comparison, it is clear that the greater is the molar mass of a gas, the greater is its density and vice versa. It means density is directly proportional to the molar mass (d=PM/RT).
The densities of these gases decrease in the following order: O[2] > CH[4] > H[2]
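The densities and their ordering can be checked with d = PM/RT in SI units (a sketch; molar masses in kg mol^-1 as the question specifies):

```python
P, R, T = 101325.0, 8.314, 273.0    # N m^-2, J mol^-1 K^-1, K

def density_kg_dm3(M):
    """d = PM/RT gives kg m^-3 for M in kg mol^-1; divide by 1000 for kg dm^-3."""
    return P * M / (R * T) / 1000.0

d = {g: density_kg_dm3(M) for g, M in [("CH4", 0.016), ("O2", 0.032), ("H2", 0.002)]}
assert d["O2"] > d["CH4"] > d["H2"]   # density order follows molar mass
```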
c) How do you justify that an increase of volume up to 100 dm^3 at 27°C will allow 2 moles of NH[3] to behave ideally, as compared to STP conditions?
2 moles of NH[3] at 27^oC, having volume 100 dm^3 will behave more ideally as compared to same amount of the gas at STP (0^oC & 44.8 dm^3 volume). This is because gases become more ideal at higher
temperatures, when their volumes increase and attractive forces become weaker.
Q.18: A sample of krypton with a volume of 6.25 dm^3, a pressure of 765 torr and a temperature of 20°C is expanded to a volume of 9.55 dm^3, and a pressure of 375 torr. What will be its final
temperature in ^oC?
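A worked sketch for Q.18 using the combined gas law (variable names are ours):

```python
P1, V1, T1 = 765.0, 6.25, 20.0 + 273.0   # torr, dm^3, K
P2, V2 = 375.0, 9.55
T2 = T1 * (P2 * V2) / (P1 * V1)          # from P1V1/T1 = P2V2/T2
print(round(T2, 1), round(T2 - 273.0, 1))  # kelvin, then degrees Celsius
```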
Q.19: Working at a vacuum line, a chemist isolated a gas in a weighing bulb with a volume of 255 cm^3, at a temperature of 25°C and under a pressure in the bulb of 10.0 torr. The gas weighed 12.1 mg.
What is molecular mass of this gas?
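A sketch of the Q.19 calculation with M = mRT/PV, converting each quantity to the units of R (variable names are ours):

```python
R = 0.0821                # atm dm^3 mol^-1 K^-1
P = 10.0 / 760.0          # torr -> atm
V = 255.0 / 1000.0        # cm^3 -> dm^3
T = 25.0 + 273.0          # K
m = 12.1e-3               # mg -> g
M = m * R * T / (P * V)   # g/mol
print(round(M, 1))
```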
Q.20: What pressure is exerted by a mixture of 2.00 g of H[2] and 8.00 g of N[2] at 273 K in a 10 dm^3 vessel?
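By Dalton's law, the total pressure follows from the total number of moles; a sketch of the Q.20 calculation (variable names are ours):

```python
R, T, V = 0.0821, 273.0, 10.0   # atm dm^3 mol^-1 K^-1, K, dm^3
n_H2 = 2.0 / 2.0                # 2 g of H2 (M = 2 g/mol)
n_N2 = 8.0 / 28.0               # 8 g of N2 (M = 28 g/mol)
P = (n_H2 + n_N2) * R * T / V   # total pressure from total moles
print(round(P, 2))              # atm
```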
Q.21: (A) The relative densities of two gases A and B are 1 : 1.5. Find out the volume of B which will diffuse in the same time in which 150 dm^3 of A will diffuse?
(B): Hydrogen (H[2]) diffuses through a porous plate at a rate of 500 cm^3 per minute at 0°C. What is the rate of diffusion of oxygen through the same porous plate at 0°C?
(C): The rate of effusion of an unknown gas A through a pinhole is found to be 0.279 times the rate of effusion of H[2] gas through the same pinhole. Calculate the molecular mass of the unknown gas
at STP.
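All three parts of Q.21 follow from Graham's law, r ∝ 1/√d (or 1/√M); a sketch with our own variable names:

```python
from math import sqrt

# (A) densities A : B = 1 : 1.5 -> r_A/r_B = sqrt(d_B/d_A) = sqrt(1.5)
V_B = 150.0 / sqrt(1.5)            # volume of B diffusing in the same time, dm^3
# (B) r_O2 = r_H2 * sqrt(M_H2/M_O2)
r_O2 = 500.0 * sqrt(2.0 / 32.0)    # cm^3 per minute
# (C) r_A/r_H2 = sqrt(M_H2/M_A) -> M_A = M_H2 / ratio^2
M_A = 2.0 / 0.279 ** 2             # g/mol
print(round(V_B, 1), r_O2, round(M_A, 1))
```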
Q.22: Calculate the number of molecules and the number of atoms in the given amounts of each gas.
(a) 20 cm^3 of CH[4] at 0°C and pressure of 700 mm of mercury
(b) 1 cm^3 of NH[3] at 100°C and the pressure is decreased by 100 torr.
Q.23: Calculate the masses of 10^20 molecules of each of H[2], O[2] and CO[2] at STP. What will happen to the masses of these gases, when the temperature of these gases is increased by 100°C and the
pressure is decreased by 100 torr.
Q.24(A): Two moles of NH[3] are enclosed in a 5 dm^3 flask at 27°C. Calculate pressure exerted by the gas assuming that:
(i) It behaves like an ideal gas.
(ii) It behaves like a real gas.
a = 4.17 atm dm^6 mol^-2
b = 0.0371 dm^3 mol^-1
(B): Also calculate the amount of pressure lessened due to forces of attraction at these conditions of volume and temperature.
Ideal Pressure= 9.852 atm
Real Pressure= 9.34 atm
Pressure lessened = 9.852 – 9.34 = 0.512 atm
(C): Do you expect the same decrease in the pressure of two moles of NH[3] having a volume of 40 dm^3 and at temperature of 27°C.
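A sketch reproducing parts (A) and (B) of Q.24 (the computed real pressure, about 9.33 atm, matches the text's 9.34 atm within rounding), and bearing on part (C): at V = 40 dm^3 the attraction term an^2/V^2 shrinks 64-fold, so the pressure decrease would be much smaller, not the same:

```python
R, T = 0.0821, 27.0 + 273.0
n, V = 2.0, 5.0
a, b = 4.17, 0.0371
P_ideal = n * R * T / V                                 # ideal gas equation
P_real = n * R * T / (V - n * b) - a * n**2 / V**2      # van der Waals equation
drop_5 = P_ideal - P_real                               # pressure lessened at V = 5 dm^3
drop_40 = a * n**2 / 40.0**2                            # attraction term at V = 40 dm^3
print(round(P_ideal, 3), round(P_real, 2), round(drop_5, 2), round(drop_40, 3))
```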
Hope you have found the exercise solutions satisfying. Give your opinion about our work. Thanks!
Introduction to Autoencoders
Alfredo Canziani
Applications of Autoencoder
DALL-E: Creating Images from Text
DALL-E (released by OpenAI) is a neural network based on the Transformers architecture, that creates images from text captions. It is a 12-billion parameter version of GPT-3, trained on a dataset of
text-image pairs.
Figure 1: DALL-E: Input-Output
Go to the website and play with the captions!
Let’s start with some definitions:
$\vx$: is observed during both training and testing
$\vy$: is observed during training but not testing
$\vz$: is not observed (neither during training nor during testing).
$\vh$: is computed from the input (hidden/internal)
$\vytilde$: is computed from the hidden (predicted $\vy$, ~ means circa)
Confused? Refer to the below figure to understand the use of different variables in different machine learning techniques.
Figure 2: Variable definitions in different machine learning techniques
These kinds of networks are used to learn the internal structure of some input and encode it in a hidden internal representation $\vh$, which expresses the input.
We already learned how to train energy-based models, let’s look at the below network:
Figure 3: Autoencoder Architecture
Here instead of computing the minimization of the energy $\red{E}$ for $\vz$, we use an encoder that approximates the minimization and provides a hidden representation $\vh$ for a given $\vy$.
\[\vh = \Enc(\vy)\]
Then the hidden representation is converted into $\vytilde$ (Here we don’t have a predictor, we have an encoder).
\[\vytilde= \Dec (\vh)\]
Basically, $\vh$ is the output of a squashing function $f$ of the rotation of our input/observation $\vy$. $\vytilde$ is the output of squashing function $g$ of the rotation of our hidden
representation $\vh$.
\[\vh = f(\mW{_h} \vy + \vb{_h}) \\ \vytilde = g(\mW{_y}\vh + \vb{_y})\]
Note that, here $\vy$ and $\vytilde$ both belong to the same input space, and $\vh$ belong to $\mathbb{R}^d$ which is the internal representation. $\mW_h$ and $\mW_y$ are matrices for rotation.
\[\vy, \vytilde \in \mathbb{R}^n \\ \vh \in \mathbb{R}^d \\ \mW_h \in \mathbb{R}^{d \times n} \\ \mW_y \in \mathbb{R}^{n \times d}\]
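The encoder/decoder equations and shapes above can be sketched in plain Python (the weights here are random and untrained, and the choice of tanh for both squashing functions is ours; in practice $\mW_h$, $\mW_y$ are learned by minimizing the reconstruction cost):

```python
from math import tanh
from random import Random

rng = Random(0)
n, d = 8, 3                                   # input dim, hidden dim (d < n here)
W_h = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(d)]   # d x n
W_y = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]   # n x d
y = [rng.gauss(0, 1) for _ in range(n)]       # an observation

def affine_squash(W, x):                      # f(Wx + b) with b = 0, f = tanh
    return [tanh(sum(w_i * x_i for w_i, x_i in zip(row, x))) for row in W]

h = affine_squash(W_h, y)                     # encoder: h = f(W_h y + b_h)
y_tilde = affine_squash(W_y, h)               # decoder: y~ = g(W_y h + b_y)
C = sum((a - b) ** 2 for a, b in zip(y, y_tilde))   # reconstruction cost
assert len(h) == d and len(y_tilde) == n and C >= 0.0
```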
This is called an autoencoder. The encoder performs amortization, so we don’t have to minimize the energy $\red{E}$ but $\red{F}$:
\[\red{F}(\vy) = \red{C}(\vy,\vytilde) + \red{R}(\vh)\]
Reconstruction Costs
Below are the two examples of reconstruction energies:
Real-Valued Input:
\[\red{C}(\vy,\vytilde) = \Vert{\vy-\vytilde}\Vert^2 = \Vert \vy-\Dec[\Enc(\vy)] \Vert^2\]
This is the squared Euclidean distance between $\vy$ and $\vytilde$.
Binary input
In the case of binary input, we can simply use binary cross-entropy
\[\red{C}(\vy,\vytilde) = - \sum_{i=1}^n{\vy{_i}\log(\vytilde{_i}) + (1-\vy{_i})\log(1-\vytilde{_i})}\]
Loss Functionals
The loss functional is the average, across all training samples, of the per-sample loss function:
\[\mathcal{L}(\red{F}(\cdot),\mY) = \frac{1}{m}\sum_{j=1}^m{\ell(\red{F}(\cdot),\vy^{(j)})} \in \mathbb{R}\]
We take the energy loss and try to push the energy down on $\vytilde$
\[\ell_{\text{energy}}(\red{F}(\cdot),\vy) = \red{F}(\vy)\]
The size of the hidden representation $\vh$ obtained using these networks can be both smaller and larger than the input size.
If we choose a smaller $\vh$, the network can be used for non-linear dimensionality reduction.
In some situations it can be useful to have a larger than input $\vh$, however, in this scenario, a plain autoencoder would collapse. In other words, since we are trying to reconstruct the input, the
model is prone to copying all the input features into the hidden layer and passing it as the output thus essentially behaving as an identity function. This needs to be avoided as this would imply
that our model fails to learn anything.
To prevent the model from collapsing, we have to employ techniques that constrain the size of the region that can take zero or low energy values. These techniques can be some form of regularization, such as sparsity constraints, adding noise, or sampling.
Denoising autoencoder
We add some augmentation/corruption, such as Gaussian noise, to an input $\vy$ sampled from the training manifold, producing a corrupted input $\vyhat$. We feed $\vyhat$ into the model and expect the reconstructed input $\vytilde$ to be similar to the original input $\vy$.
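The corruption step can be sketched as follows, assuming additive Gaussian noise; the noise scale `sigma` is a made-up hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(42)

def corrupt(y, sigma=0.1):
    # y_hat = y + Gaussian noise; the denoising autoencoder is then
    # trained to map y_hat back to the clean sample y
    return y + sigma * rng.normal(size=y.shape)

y = np.array([0.5, -0.3, 1.2])
y_hat = corrupt(y)
```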
Figure 4: Denoising Autoencoder Network architecture.
An important note: The noise added to the original input should be similar to what we expect in reality, so the model can easily recover from it.
Figure 5: Measuring the traveling distance of the input data
In the image above, the light-coloured points on the spiral represent the original data manifold. As we add noise, we move farther from the original points. These noise-added points are fed into the autoencoder to generate this graph. The direction of each arrow points to the original data point the model pushes the noise-added point towards, whereas the size of the arrow shows by how much. We also see a dark purple region along the spiral, which exists because the points in this region are equidistant from two points on the data manifold.
Vidit Bhargava, Monika Dagar 18 March 2021
How do you use the method of cylindrical shells to find the volume of the solid obtained by rotating the region bounded by y=f(x)=3x-x^2 and x axis revolved about the x=-1? | Socratic
1 Answer
$V = \frac{45\pi}{2}\ \text{units}^3$
Imagine a cylindrical shell as a rectangular prism with width $f \left(x\right)$, length $2 \pi r$, and thickness $\delta x$. The reason for the length being $2 \pi r$ is that if we unravel a
cylindrical shell into a rectangular prism, the length corresponds to the circumference of the circular cross-section of the original cylinder.
The volume of the cylindrical shell is width × length × thickness (height).
Now width $= f \left(x\right) = 3 x - {x}^{2}$
$\therefore$ Volume of shell$= 2 \pi r \left(3 x - {x}^{2}\right) \delta x$
If we sketch the parabola $y = 3 x - {x}^{2}$ we can see that the region bound by this parabola and the x-axis resides in the domain $0 \le x \le 3$
Also, the radius of the cylindrical shell is considered to be the distance from the axis of revolution $x = - 1$ and the edge of the shell (i.e. at a position $x$ within the domain $0 \le x \le 3$)
Hence, $r = 1 + x$.
$\therefore$Volume of shell$= 2 \pi \left(1 + x\right) \left(3 x - {x}^{2}\right) \delta x$
i.e. the change in volume, $\delta v = 2 \pi \left(1 + x\right) \left(3 x - {x}^{2}\right) \delta x$
To find the volume, we limit the thickness of the shells, and find their summation within the domain $0 \le x \le 3$
$V = {\lim}_{\delta x \to 0} {\sum}_{x = 0}^{3} \left(2 \pi \left(1 + x\right) \left(3 x - {x}^{2}\right) \delta x\right)$
$= 2 \pi {\int}_{0}^{3} \left(1 + x\right) \left(3 x - {x}^{2}\right) \mathrm{dx}$
$= 2 \pi {\int}_{0}^{3} \left(3 x + 2 {x}^{2} - {x}^{3}\right) \mathrm{dx}$
$= 2 \pi {\left[\frac{3}{2} {x}^{2} + \frac{2}{3} {x}^{3} - \frac{1}{4} {x}^{4}\right]}_{0}^{3}$
$= 2 \pi \left[\frac{45}{4}\right]$
$= \frac{45\pi}{2}\ \text{units}^3$
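The result can be sanity-checked numerically by approximating the shell integral with a trapezoidal sum (the sample count is arbitrary):

```python
import numpy as np

# V = 2*pi * integral from 0 to 3 of (1 + x)(3x - x^2) dx,
# expected to equal 45*pi/2
x = np.linspace(0.0, 3.0, 100001)
integrand = (1 + x) * (3 * x - x**2)
dx = x[1] - x[0]
# Trapezoidal rule: average adjacent samples, weight by the step
V = 2 * np.pi * np.sum(0.5 * (integrand[:-1] + integrand[1:]) * dx)
```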
Design Kalman
Design Kalman filter for state estimation
[kalmf,L,P] = kalman(sys,Q,R,N) creates a Kalman filter given the plant model sys and the noise covariance data Q, R, and N. The function computes a Kalman filter for use in a Kalman estimator with
the configuration shown in the following diagram.
You construct the model sys with known inputs u and white process noise inputs w, such that w consists of the last N[w] inputs to sys. The "true" plant output y[t] consists of all outputs of sys. You
also provide the noise covariance data Q, R, and N. The returned Kalman filter kalmf is a state-space model that takes the known inputs u and the noisy measurements y and produces an estimate $\stackrel{^}{y}$ of the true plant output and an estimate $\stackrel{^}{x}$ of the plant states. kalman also returns the Kalman gains L and the steady-state error covariance matrix P.
[kalmf,L,P] = kalman(sys,Q,R,N,sensors,known) computes a Kalman filter when one or both of the following conditions exist.
• Not all outputs of sys are measured.
• The disturbance inputs w are not the last inputs of sys.
The index vector sensors specifies which outputs of sys are measured. These outputs make up y. The index vector known specifies which inputs are known (deterministic). The known inputs make up u. The
kalman command takes the remaining inputs of sys to be the stochastic inputs w.
[kalmf,L,P,Mx,Z,My] = kalman(___) also returns the innovation gains Mx and My and the steady-state error covariances P and Z for a discrete-time sys. You can use this syntax with any of the previous
input argument combinations.
[kalmf,L,P,Mx,Z,My] = kalman(___,type) specifies the estimator type for a discrete-time sys.
• type = 'current' — Compute output estimates $\stackrel{^}{y}\left[n|n\right]$ and state estimates $\stackrel{^}{x}\left[n|n\right]$ using all available measurements up to $y\left[n\right]$.
• type = 'delayed' — Compute output estimates $\stackrel{^}{y}\left[n|n-1\right]$ and state estimates $\stackrel{^}{x}\left[n|n-1\right]$ using measurements only up to $y\left[n-1\right]$. The
delayed estimator is easier to implement inside control loops.
You can use the type input argument with any of the previous input argument combinations.
Design Kalman Filter for SISO Plant
Design a Kalman filter for a plant that has additive white noise w on the input and v on the output, as shown in the following diagram.
Assume that the plant has the following state-space matrices and is a discrete-time plant with an unspecified sample time (Ts = -1).
A = [1.1269 -0.4940 0.1129
1.0000 0 0
0 1.0000 0];
B = [-0.3832
C = [1 0 0];
D = 0;
Plant = ss(A,B,C,D,-1);
Plant.InputName = 'un';
Plant.OutputName = 'yt';
To use kalman, you must provide a model sys that has an input for the noise w. Thus, sys is not the same as Plant, because Plant takes the input un = u + w. You can construct sys by creating a
summing junction for the noise input.
Sum = sumblk('un = u + w');
sys = connect(Plant,Sum,{'u','w'},'yt');
Equivalently, you can use sys = Plant*[1 1].
Specify the noise covariances. Because the plant has one noise input and one output, these values are scalar. In practice, these values are properties of the noise sources in your system, which you
determine by measurement or other knowledge of your system. For this example, assume both noise sources have unit covariance and are not correlated (N = 0).
Q = 1;
R = 1;
N = 0;
Design the filter.
[kalmf,L,P] = kalman(sys,Q,R,N);
State-space model with 4 outputs, 2 inputs, and 3 states.
The Kalman filter kalmf is a state-space model having two inputs and four outputs. kalmf takes as inputs the plant input signal u and the noisy plant output $y={y}_{t}+v$. The first output is the estimated true plant output $\stackrel{^}{y}$. The remaining three outputs are the state estimates $\stackrel{^}{x}$. Examine the input and output names of kalmf to see how kalman labels them accordingly.
ans = 2x1 cell
{'u' }
ans = 4x1 cell
Examine the Kalman gains L. For a SISO plant with three states, L is a three-element column vector.
L = 3×1
For an example that shows how to use kalmf to reduce measurement error due to noise, see Kalman Filtering.
Design Kalman Filter for MIMO Plant
Consider a plant with three inputs, one of which represents process noise w, and two measured outputs. The plant has four states.
Assuming the following state-space matrices, create sys.
A = [-0.71 0.06 -0.19 -0.17;
0.06 -0.52 -0.03 0.30;
-0.19 -0.03 -0.24 -0.02;
-0.17 0.30 -0.02 -0.41];
B = [ 1.44 2.91 0;
-1.97 0.83 -0.27;
-0.20 1.39 1.10;
-1.2 0 -0.28];
C = [ 0 -0.36 -1.58 0.28;
-2.05 0 0.51 0.03];
D = zeros(2,3);
sys = ss(A,B,C,D);
sys.InputName = {'u1','u2','w'};
sys.OutputName = {'y1','y2'};
Because the plant has only one process noise input, the covariance Q is a scalar. For this example, assume the process noise has unit covariance.
kalman uses the dimensions of Q to determine which inputs are known and which are the noise inputs. For scalar Q, kalman assumes one noise input and uses the last input, unless you specify otherwise
(see Plant with Unmeasured Outputs).
For the measurement noise on the two outputs, specify a 2-by-2 noise covariance matrix. For this example, use a unit variance for the first output, and variance of 1.3 for the second output. Set the
off-diagonal values to zero to indicate that the two noise channels are uncorrelated.
Q = 1;
R = [1 0; 0 1.3];
Design the Kalman filter.
[kalmf,L,P] = kalman(sys,Q,R);
Examine the inputs and outputs. kalman uses the InputName, OutputName, InputGroup, and OutputGroup properties of kalmf to help you keep track of what the inputs and outputs of kalmf represent.
ans = struct with fields:
KnownInput: [1 2]
Measurement: [3 4]
ans = 4x1 cell
ans = struct with fields:
OutputEstimate: [1 2]
StateEstimate: [3 4 5 6]
ans = 6x1 cell
Thus the two known inputs u1 and u2 are the first two inputs of kalmf and the two measured outputs y1 and y2 are the last two inputs to kalmf. For the outputs of kalmf, the first two are the
estimated outputs, and the remaining four are the state estimates. To use the Kalman filter, connect these inputs to the plant and noise signals in a manner analogous to that shown for a SISO plant
in Kalman Filtering.
Plant with Unmeasured Outputs
Consider a plant with four inputs and two outputs. The first and third inputs are known, while the second and fourth inputs represent the process noise. The plant also has two outputs, but only the
second of them is measured.
Use the following state-space matrices to create sys.
A = [-0.37 0.14 -0.01 0.04;
0.14 -1.89 0.98 -0.11;
-0.01 0.98 -0.96 -0.14;
0.04 -0.11 -0.14 -0.95];
B = [-0.07 -2.32 0.68 0.10;
-2.49 0.08 0 0.83;
0 -0.95 0 0.54;
-2.19 0.41 0.45 0.90];
C = [ 0 0 -0.50 -0.38;
-0.15 -2.12 -1.27 0.65];
D = zeros(2,4);
sys = ss(A,B,C,D,-1); % Discrete with unspecified sample time
sys.InputName = {'u1','w1','u2','w2'};
sys.OutputName = {'yun','ym'};
To use kalman to design a filter for this system, use the known and sensors input arguments to specify which inputs to the plant are known and which output is measured.
known = [1 3];
sensors = [2];
Specify the noise covariances and design the filter.
Q = eye(2);
R = 1;
N = 0;
[kalmf,L,P] = kalman(sys,Q,R,N,sensors,known);
Examining the input and output labels of kalmf shows the inputs that the filter expects and the outputs it returns.
ans = struct with fields:
KnownInput: [1 2]
Measurement: 3
ans = 3x1 cell
kalmf takes as inputs the two known inputs of sys and the noisy measured outputs of sys.
ans = struct with fields:
OutputEstimate: 1
StateEstimate: [2 3 4 5]
The first output of kalmf is its estimate of the true value of the measured plant output. The remaining outputs are the state estimates.
Input Arguments
sys — Plant model with process noise
ss model
Plant model with process noise, specified as a state-space (ss) model. The plant has known inputs u and white process noise inputs w. The plant output y[t] does not include the measurement noise.
kalman assumes Gaussian measurement noise v on the output. In continuous time, the state-space equations that kalman works with are:
$\begin{array}{l}\stackrel{˙}{x}=Ax+Bu+Gw\text{ }\text{ }\\ y=Cx+Du+Hw+v\end{array}$
In discrete time, the state-space equations are:
$\begin{array}{l}x\left[n+1\right]=Ax\left[n\right]+Bu\left[n\right]+Gw\left[n\right]\\ y\left[n\right]=Cx\left[n\right]+Du\left[n\right]+Hw\left[n\right]+v\left[n\right]\end{array}$
If you do not use the known input argument, kalman uses the size of Q to determine which inputs of sys are noise inputs. In this case, kalman treats the last N[w] = size(Q,1) inputs as the noise
inputs. When the noise inputs w are not the last inputs of sys, you can use the known input argument to specify which plant inputs are known. kalman treats the remaining inputs as stochastic.
For additional constraints on the properties of the plant matrices, see Limitations.
Q — Process noise covariance
scalar | matrix
Process noise covariance, specified as a scalar or N[w]-by-N[w] matrix, where N[w] is the number of noise inputs to the plant. kalman uses the size of Q to determine which inputs of sys are noise
inputs, taking the last N[w] = size(Q,1) inputs to be the noise inputs unless you specify otherwise with the known input argument.
kalman assumes that the process noise w is Gaussian noise with covariance Q = E(ww^T). When the plant has only one process noise input, Q is a scalar equal to the variance of w. When the plant has
multiple, uncorrelated noise inputs, Q is a diagonal matrix. In practice, you determine the appropriate values for Q by measuring or making educated guesses about the noise properties of your system.
R — Measurement noise covariance
scalar | matrix
Measurement noise covariance, specified as a scalar or N[y]-by-N[y] matrix, where N[y] is the number of plant outputs. kalman assumes that the measurement noise v is white noise with covariance R = E
(vv^T). When the plant has only one output channel, R is a scalar equal to the variance of v. When the plant has multiple output channels with uncorrelated measurement noise, R is a diagonal matrix.
In practice, you determine the appropriate values for R by measuring or making educated guesses about the noise properties of your system.
For additional constraints on the measurement noise covariance, see Limitations.
N — Noise cross covariance
0 (default) | scalar | matrix
Noise cross covariance, specified as a scalar or N[w]-by-N[y] matrix. kalman assumes that the process noise w and the measurement noise v satisfy N = E(wv^T). If the two noise sources are not
correlated, you can omit N, which is equivalent to setting N = 0. In practice, you determine the appropriate values for N by measuring or making educated guesses about the noise properties of your system.
sensors — Measured outputs of sys
Measured outputs of sys, specified as a vector of indices identifying which outputs of sys are measured. For instance, suppose that your system has three outputs, but only two of them are measured,
corresponding to the first and third outputs of sys. In this case, set sensors = [1 3].
known — Known inputs of sys
Known inputs of sys, specified as a vector of indices identifying which inputs are known (deterministic). For instance, suppose that your system has three inputs, but only the first and second inputs
are known. In this case, set known = [1 2]. kalman interprets any remaining inputs of sys to be stochastic.
type — Type of discrete-time estimator
'current' (default) | 'delayed'
Type of discrete-time estimator to compute, specified as either 'current' or 'delayed'. This input is relevant only for discrete-time sys.
• 'current' — Compute output estimates $\stackrel{^}{y}\left[n|n\right]$ and state estimates $\stackrel{^}{x}\left[n|n\right]$ using all available measurements up to $y\left[n\right]$.
• 'delayed' — Compute output estimates $\stackrel{^}{y}\left[n|n-1\right]$ and state estimates $\stackrel{^}{x}\left[n|n-1\right]$ using measurements only up to $y\left[n-1\right]$. The delayed
estimator is easier to implement inside control loops.
For details about how kalman computes the current and delayed estimates, see Discrete-Time Estimation.
Output Arguments
kalmf — Kalman estimator
ss model
Kalman estimator or kalman filter, returned as a state-space (ss) model. The resulting estimator has inputs $\left[u;y\right]$ and outputs $\left[\stackrel{^}{y};\stackrel{^}{x}\right]$. In other
words, kalmf takes as inputs the plant input u and the noisy plant output y, and produces as outputs the estimated noise-free plant output $\stackrel{^}{y}$ and the estimated state values $\stackrel{^}{x}$.
kalman automatically sets the InputName, OutputName, InputGroup, and OutputGroup properties of kalmf to help you keep track of which inputs and outputs correspond to which signals.
For the state and output equations of kalmf, see Continuous-Time Estimation and Discrete-Time Estimation.
L — Filter gains
Filter gains, returned as an array of size N[x]-by-N[y], where N[x] is the number of states in the plant and N[y] is the number of plant outputs. In continuous time, the state equation of the Kalman
filter is:
$\frac{d\stackrel{^}{x}}{dt}=A\stackrel{^}{x}+Bu+L\left(y-C\stackrel{^}{x}-Du\right)$
In discrete time, the state equation is:
$\stackrel{^}{x}\left[n+1|n\right]=A\stackrel{^}{x}\left[n|n-1\right]+Bu\left[n\right]+L\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right)$
For details about these expressions and how L is computed, see Continuous-Time Estimation and Discrete-Time Estimation.
P, Z — Steady-state error covariances
Steady-state error covariances, returned as N[x]-by-N[x], where N[x] is the number of states in the plant. The Kalman filter computes state estimates that minimize P. In continuous time, the
steady-state error covariance is given by:
$P=\underset{t\to \infty }{\mathrm{lim}}E\left(\left\{x-\stackrel{^}{x}\right\}{\left\{x-\stackrel{^}{x}\right\}}^{T}\right).$
In discrete time, the steady-state error covariances are given by:
$P=\underset{n\to \infty }{\mathrm{lim}}E\left(\left\{x\left[n\right]-\stackrel{^}{x}\left[n|n-1\right]\right\}{\left\{x\left[n\right]-\stackrel{^}{x}\left[n|n-1\right]\right\}}^{T}\right),$
$Z=\underset{n\to \infty }{\mathrm{lim}}E\left(\left\{x\left[n\right]-\stackrel{^}{x}\left[n|n\right]\right\}{\left\{x\left[n\right]-\stackrel{^}{x}\left[n|n\right]\right\}}^{T}\right).$
For further details about these quantities and how kalman uses them, see Continuous-Time Estimation and Discrete-Time Estimation.
Mx, My — Innovation gains of state estimators
Innovation gains of the state estimators for discrete-time systems, returned as an array.
Mx and My are relevant only when type = 'current', which is the default estimator for discrete-time systems. For continuous-time sys or type = 'delayed', then Mx = My = [].
For the 'current' type estimator, Mx and My are the innovation gains in the update equations (see Discrete-Time Estimation).
When there is no direct feedthrough from the noise input w to the plant output y (that is, when H = 0, see Discrete-Time Estimation), then ${M}_{y}=C{M}_{x}$, and the output estimate simplifies to $\stackrel{^}{y}\left[n|n\right]=C\stackrel{^}{x}\left[n|n\right]+Du\left[n\right]$.
The dimensions of the arrays Mx and My depend on the dimensions of sys as follows.
• Mx — N[x]-by-N[y], where N[x] is the number of states in the plant and N[y] is the number of outputs.
• My — N[y]-by-N[y].
For details about how kalman obtains Mx and My, see Discrete-Time Estimation.
Limitations
The plant and noise data must satisfy:
• (C,A) is detectable.
• $\overline{R}>0$ and $\left[\begin{array}{cc}\overline{Q}& \overline{N}\\ {\overline{N}}^{\prime }& \overline{R}\end{array}\right]\ge 0$, where
$\left[\begin{array}{cc}\overline{Q}& \overline{N}\\ {\overline{N}}^{\prime }& \overline{R}\end{array}\right]=\left[\begin{array}{cc}G& 0\\ H& I\end{array}\right]\left[\begin{array}{cc}Q& N\\ {N}^{\prime }& R\end{array}\right]{\left[\begin{array}{cc}G& 0\\ H& I\end{array}\right]}^{\prime }.$
• $\left(A-\overline{N}{\overline{R}}^{-1}C,\overline{Q}-\overline{N}{\overline{R}}^{-1}{\overline{N}}^{T}\right)$ has no uncontrollable mode on the imaginary axis in continuous time, or on the
unit circle in discrete time.
Continuous-Time Estimation
Consider a continuous-time plant with known inputs u, white process noise w, and white measurement noise v:
$\begin{array}{l}\stackrel{˙}{x}=Ax+Bu+Gw\text{ }\text{ }\\ y=Cx+Du+Hw+v\end{array}$
The noise signals w and v satisfy:
$E\left(w\right)=E\left(v\right)=0,\text{ }E\left(w{w}^{T}\right)=Q,\text{ }E\left(v{v}^{T}\right)=R,\text{ }E\left(w{v}^{T}\right)=N$
The Kalman filter, or Kalman estimator, computes a state estimate $\stackrel{^}{x}\left(t\right)$ that minimizes the steady-state error covariance:
$P=\underset{t\to \infty }{\mathrm{lim}}E\left(\left\{x-\stackrel{^}{x}\right\}{\left\{x-\stackrel{^}{x}\right\}}^{T}\right).$
The Kalman filter has the following state and output equations:
$\begin{array}{l}\frac{d\stackrel{^}{x}}{dt}=A\stackrel{^}{x}+Bu+L\left(y-C\stackrel{^}{x}-Du\right)\\ \left[\begin{array}{c}\stackrel{^}{y}\\ \stackrel{^}{x}\end{array}\right]=\left[\begin{array}{c}
C\\ I\end{array}\right]\stackrel{^}{x}+\left[\begin{array}{c}D\\ 0\end{array}\right]u\end{array}$
To obtain the filter gain L, kalman solves an algebraic Riccati equation; the gain is $L=\left(P{C}^{T}+\overline{N}\right){\overline{R}}^{-1}$, where
$\begin{array}{l}\overline{R}=R+HN+{N}^{T}{H}^{T}+HQ{H}^{T}\\ \overline{N}=G\left(Q{H}^{T}+N\right)\end{array}$
P solves the corresponding algebraic Riccati equation.
The estimator uses the known inputs u and the measurements y to generate the output and state estimates $\stackrel{^}{y}$ and $\stackrel{^}{x}$.
Discrete-Time Estimation
The discrete plant is given by:
$\begin{array}{l}x\left[n+1\right]=Ax\left[n\right]+Bu\left[n\right]+Gw\left[n\right]\\ y\left[n\right]=Cx\left[n\right]+Du\left[n\right]+Hw\left[n\right]+v\left[n\right]\end{array}$
In discrete time, the noise signals w and v satisfy:
$E\left(w\left[n\right]w{\left[n\right]}^{T}\right)=Q,\text{ }E\left(v\left[n\right]v{\left[n\right]}^{T}\right)=R,\text{ }E\left(w\left[n\right]v{\left[n\right]}^{T}\right)=N$
The discrete-time estimator has the following state equation:
$\stackrel{^}{x}\left[n+1|n\right]=A\stackrel{^}{x}\left[n|n-1\right]+Bu\left[n\right]+L\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right)$
kalman solves a discrete Riccati equation to obtain the gain matrix $L=\left(AP{C}^{T}+\overline{N}\right){\left(CP{C}^{T}+\overline{R}\right)}^{-1}$, where
$\begin{array}{l}\overline{R}=R+HN+{N}^{T}{H}^{T}+HQ{H}^{T}\\ \overline{N}=G\left(Q{H}^{T}+N\right)\end{array}$
kalman can compute two variants of the discrete-time Kalman estimator, the current estimator (type = 'current') and the delayed estimator (type = 'delayed').
• Current estimator — Generates output estimates $\stackrel{^}{y}\left[n|n\right]$ and state estimates $\stackrel{^}{x}\left[n|n\right]$ using all available measurements up to $y\left[n\right]$.
This estimator has the output equation
$\left[\begin{array}{c}\stackrel{^}{y}\left[n|n\right]\\ \stackrel{^}{x}\left[n|n\right]\end{array}\right]=\left[\begin{array}{c}\left(I-{M}_{y}\right)C\\ I-{M}_{x}C\end{array}\right]\stackrel{^}
{x}\left[n|n-1\right]+\left[\begin{array}{cc}\left(I-{M}_{y}\right)D& {M}_{y}\\ -{M}_{x}D& {M}_{x}\end{array}\right]\left[\begin{array}{c}u\left[n\right]\\ y\left[n\right]\end{array}\right].$
where the innovation gains M[x] and M[y] are defined as:
$\begin{array}{c}{M}_{x}=P{C}^{T}{\left(CP{C}^{T}+\overline{R}\right)}^{-1},\\ {M}_{y}=\left(CP{C}^{T}+HQ{H}^{T}+HN\right){\left(CP{C}^{T}+\overline{R}\right)}^{-1}.\end{array}$
Thus, M[x] updates the state estimate $\stackrel{^}{x}\left[n|n-1\right]$ using the new measurement $y\left[n\right]$:
$\stackrel{^}{x}\left[n|n\right]=\stackrel{^}{x}\left[n|n-1\right]+{M}_{x}\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right).$
Similarly, M[y] computes the updated output estimate:
$\stackrel{^}{y}\left[n|n\right]=C\stackrel{^}{x}\left[n|n-1\right]+Du\left[n\right]+{M}_{y}\left(y\left[n\right]-C\stackrel{^}{x}\left[n|n-1\right]-Du\left[n\right]\right).$
When H = 0, then ${M}_{y}=C{M}_{x}$, and the output estimate simplifies to $\stackrel{^}{y}\left[n|n\right]=C\stackrel{^}{x}\left[n|n\right]+Du\left[n\right]$.
• Delayed estimator — Generates output estimates $\stackrel{^}{y}\left[n|n-1\right]$ and state estimates $\stackrel{^}{x}\left[n|n-1\right]$ using measurements only up to $y\left[n-1\right]$. This estimator
has the output equation:
$\left[\begin{array}{c}\stackrel{^}{y}\left[n|n-1\right]\\ \stackrel{^}{x}\left[n|n-1\right]\end{array}\right]=\left[\begin{array}{c}C\\ I\end{array}\right]\stackrel{^}{x}\left[n|n-1\right]+\left
[\begin{array}{cc}D& 0\\ 0& 0\end{array}\right]\left[\begin{array}{c}u\left[n\right]\\ y\left[n\right]\end{array}\right]$
The delayed estimator is easier to deploy inside control loops.
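As a rough Python counterpart, the steady-state quantities can be computed with SciPy's discrete algebraic Riccati solver; the plant below is hypothetical, and H = 0, N = 0 are assumed so that the equations take their simplest form.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete plant: x[n+1] = A x[n] + G w[n],  y[n] = C x[n] + v[n]
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
G = np.array([[0.0],
              [1.0]])
Q = np.array([[1.0]])   # process noise covariance E(w w^T)
R = np.array([[0.5]])   # measurement noise covariance E(v v^T)

# Steady-state predicted error covariance P: the filtering Riccati equation
# is the dual of the control one, hence the transposed arguments
P = solve_discrete_are(A.T, C.T, G @ Q @ G.T, R)

S = C @ P @ C.T + R                  # innovation covariance
Mx = P @ C.T @ np.linalg.inv(S)      # innovation gain for the x^[n|n] update
L = A @ Mx                           # predictor gain for the x^[n+1|n] update
```

The test of correctness is that P satisfies the filtering Riccati equation and is positive definite.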
Version History
Introduced before R2006a
See Also
Secret sharing and non-Shannon information inequalities
The known secret-sharing schemes for most access structures are not efficient; even for a one-bit secret the length of the shares in the schemes is 2^O(n) , where n is the number of participants in
the access structure. It is a long standing open problem to improve these schemes or prove that they cannot be improved. The best known lower bound is by Csirmaz, who proved that there exist access
structures with n participants such that the size of the share of at least one party is n/log n times the secret size. Csirmaz's proof uses Shannon information inequalities, which were the only
information inequalities known when Csirmaz published his result. On the negative side, Csirmaz proved that by only using Shannon information inequalities one cannot prove a lower bound of ω(n) on
the share size. In the last decade, a sequence of non-Shannon information inequalities was discovered. In fact, it was proved that there are infinitely many independent information inequalities even
in four variables. This raises the hope that these inequalities can help in improving the lower bounds beyond n. However, we show that any information inequality with four or five variables cannot
prove a lower bound of ω(n) on the share size. In addition, we show that the same negative result holds for all information inequalities with more than five variables that are known to date.
• Linear programs
• lower bounds
• monotone span programs
• non-Shannon information inequalities
• rank inequalities
• secret-sharing
ASJC Scopus subject areas
• Information Systems
• Computer Science Applications
• Library and Information Sciences
Dive into the research topics of 'Secret sharing and non-Shannon information inequalities'. Together they form a unique fingerprint.
Access Array Elements
In both lists and arrays, elements are accessed using square brackets. Let's review the distinction between indexing and slicing:
• To retrieve a single element, you simply need to specify the index of that element in square brackets (start counting from 0);
• If you want to obtain a sequence from the original array, you should use slices.
We'll start with simple indexing. Let's have a look at the following image:
Let's see how it works with examples.
Get the first element from the following array:
import numpy as np
# Creating array
arr = np.array([1, 2, 3, 4, 5])
# Get the first element
print(arr[0])
Retrieve the second element from the following array:
import numpy as np
# Creating array
arr = np.array([1, 2, 3, 4, 5])
# Get the second element
print(arr[1])
Retrieve the third and fourth elements from the following array and then add them together:
import numpy as np
# Creating array
arr = np.array([1, 2, 3, 4, 5])
# Adding the third and the fourth elements
print(arr[2] + arr[3])
Now, it's time to explore slicing. First, let's examine the syntax of slicing: array[start:end:step], where:
• start is the index from which slicing begins;
• end is the index where slicing stops (note that this index is not included);
• step is the parameter that specifies the intervals between the indices.
Let's have a look at the following image:
Omitting start, end and step
As you can see, we can often omit the start, end, step or even all of them at the same time. step, for example, can be omitted when we want it to be equal to 1. start and end can be omitted in the
following scenarios:
1. Omitting start:
□ slicing from the first element (step is positive);
□ slicing from the last element (step is negative).
2. Omitting end:
□ slicing to the last element inclusive (step is positive);
□ slicing to the first element inclusive (step is negative).
In the example above, a[2:4] has the step equal to 1. a[-2:] goes from the second to last element to the end of the array with step equal to 1. a[::2] goes from the first element to the end of the
array with step equal to 2.
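A quick runnable check of these slice forms (the array values are arbitrary):

```python
import numpy as np

a = np.array([10, 20, 30, 40, 50, 60])

print(a[2:4])   # indices 2 and 3 -> [30 40]
print(a[-2:])   # last two elements -> [50 60]
print(a[::2])   # every second element, from the start -> [10 30 50]
print(a[::-1])  # negative step walks backwards -> [60 50 40 30 20 10]
```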
Retrieve the first and last elements from the following array [13, 99, 11, 23, 5, 41] and then multiply them. Please use positive indexing.
Switch to desktop for real-world practiceContinue from where you are using one of the options below
Thanks for your feedback!
In both lists and arrays, elements are accessed using square brackets. Let's review the distinction between indexing and slicing:
• To retrieve a single element, you simply need to specify the index of that element in square brackets (start counting from 0);
• If you want to obtain a sequence from the original array, you should use slices.
We'll start with simple indexing. Let's have a look at the following image:
Let's see how it works with examples.
Get the first element from the following array:
import numpy as np
# Creating array
arr = np.array([1, 2, 3, 4, 5])
# Get the first element
Retrieve the second element from the following array:
import numpy as np
# Creating array
arr = np.array([1, 2, 3, 4, 5])
# Get the second element
Retrieve the third and fourth elements from the following array and then add them together:
import numpy as np
# Creating array
arr = np.array([1, 2, 3, 4, 5])
# Adding the third and the fourth elements
print(arr[2] + arr[3])
Now, it's time to explore slicing. First, let's examine the syntax of slicing: array[start:end:step], where:
• start is the index from which slicing begins;
• end is the index where slicing stops (note that this index is not included);
• step is the parameter that specifies the intervals between the indices.
Let's have a look at the following image:
Omitting start, end and step
As you can see, we can often omit the start, end, step or even all of them at the same time. step, for example, can be omitted when we want it to be equal to 1. start and end can be omitted in the
following scenarios:
1. Omitting start:
□ slicing from the first element (step is positive);
□ slicing from the last element (step is negative).
2. Omitting end:
□ slicing to the last element inclusive (step is positive);
□ slicing to the first element inclusive (step is negative).
In the example above, a[2:4] has the step equal to 1. a[-2:] goes from the second to last element to the end of the array with step equal to 1. a[::2] goes from the first element to the end of the
array with step equal to 2.
Retrieve the first and last elements from the following array [13, 99, 11, 23, 5, 41] and then multiply them. Please use positive indexing.
Switch to desktop for real-world practiceContinue from where you are using one of the options below
Thanks for your feedback!
In both lists and arrays, elements are accessed using square brackets. Let's review the distinction between indexing and slicing:
• To retrieve a single element, you simply need to specify the index of that element in square brackets (start counting from 0);
• If you want to obtain a sequence from the original array, you should use slices.
We'll start with simple indexing. Let's have a look at the following image:
Let's see how it works with examples.
Get the first element from the following array:
import numpy as np
# Creating array
arr = np.array([1, 2, 3, 4, 5])
# Get the first element
print(arr[0])
Retrieve the second element from the following array:
import numpy as np
# Creating array
arr = np.array([1, 2, 3, 4, 5])
# Get the second element
print(arr[1])
Retrieve the third and fourth elements from the following array and then add them together:
import numpy as np
# Creating array
arr = np.array([1, 2, 3, 4, 5])
# Adding the third and the fourth elements
print(arr[2] + arr[3])
Now, it's time to explore slicing. First, let's examine the syntax of slicing: array[start:end:step], where:
• start is the index from which slicing begins;
• end is the index where slicing stops (note that this index is not included);
• step is the parameter that specifies the intervals between the indices.
Let's have a look at the following image:
Omitting start, end and step
As you can see, we can often omit the start, end, step or even all of them at the same time. step, for example, can be omitted when we want it to be equal to 1. start and end can be omitted in the
following scenarios:
1. Omitting start:
□ slicing from the first element (step is positive);
□ slicing from the last element (step is negative).
2. Omitting end:
□ slicing to the last element inclusive (step is positive);
□ slicing to the first element inclusive (step is negative).
In the example above, a[2:4] has a step equal to 1. a[-2:] goes from the second-to-last element to the end of the array with a step equal to 1. a[::2] goes from the first element to the end of the array with a step equal to 2.
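As a quick sketch of those omission rules (the array values here are illustrative, not from the lesson):

```python
import numpy as np

arr = np.array([10, 20, 30, 40, 50])

# Omit start and end, step 2: every other element from the beginning
print(arr[::2])   # [10 30 50]

# Omit end with a negative start: from the second-to-last element onward
print(arr[-2:])   # [40 50]

# Omit start and end with a negative step: the reversed array
print(arr[::-1])  # [50 40 30 20 10]
```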
Retrieve the first and last elements from the following array [13, 99, 11, 23, 5, 41] and then multiply them. Please use positive indexing.
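One possible solution sketch for this exercise (with positive indexing, the last of the six elements has index 5):

```python
import numpy as np

arr = np.array([13, 99, 11, 23, 5, 41])

# First element (index 0) times last element (index 5)
print(arr[0] * arr[5])  # 533
```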
|
{"url":"https://codefinity.com/courses/v2/671389bc-34ed-4de7-83cd-2d1bfcf00a76/e20b3ab2-78bb-4216-94c0-98814544963a/468c6c7f-39de-4746-9594-23e27a8b7a7d","timestamp":"2024-11-06T15:09:13Z","content_type":"text/html","content_length":"488533","record_id":"<urn:uuid:ee066631-b5f1-43fc-bdbb-40e9fc1244ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00771.warc.gz"}
|
Juan M. Alonso: Graphs associated to finite metric spaces | KTH
Juan M. Alonso: Graphs associated to finite metric spaces
Time: Tue 2022-04-12 10.15
Location: KTH, 3721, Lindstedtsvägen 25, and Zoom
Video link: Meeting ID: 659 3743 5667
Participating: Juan M. Alonso ((BIOS) IMASL - CONICET and Universidad Nacional de San Luis)
Many concrete problems are formulated in terms of a finite set of points in some \(\mathbb{R}^N\) which, via the ambient Euclidean metric, becomes a finite metric space \((M,d)\). This situation
arises, for instance, when studying the glass transition from simulations.
To obtain information from M it is not always possible to use your favorite mathematical tool directly on M. The “favorite tool” in my case is finite dimension which, although defined on finite
metric spaces, is not effectively computable when M is large and unstructured. In such cases, a useful alternative is to associate a graph to M, and do mathematics directly on the graph, rather than
on the space. One should think of this graph as an approximation to M.
Among the many graphs that can be associated to M, I first considered \({MC} = {MC}(M)\), the Minimum Connected graph, a version — adapted to our situation — of the Vietoris complex of a metric
space. Unfortunately MC is usually a rather dense graph. I then introduce \(CS = CS(M)\), the Connected Sparse graph, a streamlined version of MC. CS encodes the local information of M; in fact, it
is almost a definition of what local structure of M means.
Despite its name, CS can be dense, even a complete graph. However, in our application to glass, we computed CS for more than 700 spaces with about 2200 points each. All of them turned out to be trees.
To understand this “coincidence”, I considered \(\mathfrak{M}_k\), the set of all subsets of k elements contained in some fixed \(\mathbb{R}^N\), and defined a metric on it. In the talk I will
describe subsets D of \(\mathfrak{M}_k\) such that D contains open and dense subsets of \(\mathfrak{M}_k\), and have, moreover, the property that CS(M) is a tree for all M in D. In particular, the
“general case” is for CS to be a tree.
|
{"url":"https://www.kth.se/math/kalender/juan-m-alonso-graphs-associated-to-finite-metric-spaces-1.1158155?date=2022-04-12&orgdate=2022-02-03&length=1&orglength=332","timestamp":"2024-11-06T10:29:38Z","content_type":"text/html","content_length":"58534","record_id":"<urn:uuid:ee313f7b-d266-47ba-8706-b98b718b2d82>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00396.warc.gz"}
|
Coefficient
: A number or symbol multiplying a variable or an unknown quantity in an algebraic term, as 4 in the term 4x, or x in the term x(a+b).
: A numerical measure of a physical or chemical property that is constant for a system under specified conditions, such as the coefficient of friction.
|
{"url":"https://m.everything2.com/title/Coefficient","timestamp":"2024-11-02T18:02:30Z","content_type":"text/html","content_length":"29959","record_id":"<urn:uuid:9832ba82-a75f-445e-9bb0-1251be679079>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00360.warc.gz"}
|
Consequence of the Concept of the Universe as a Computer
The ACM’s Ubiquity has been running a symposium on the question What is Computation?. Amusingly they let a slacker like me take a shot at the question and my essay has now been posted: Computation
and Fundamental Physics. As a reviewer of the article said, this reads like an article someone would have written after attending a science fiction convention. Which I think was supposed to be an
insult, but which I take as a blessing. For the experts in the audience, the fun part starts at the “Fundamental Physics” heading.
4 Replies to “Consequence of the Concept of the Universe as a Computer”
1. Science fiction can be very instructive. RE: concept of a universe as a computer, see TRON: LEGACY.
2. Thanks for pointing out the work by McCann and Pippenger; I’m excited now to look at McCann’s thesis and their paper.
3. I have a question about Preskill's view that the universe is a fault tolerant computer:
Usually the micro-processes where the information is lost, and the processes at larger scales where information is preserved, are described by different physical descriptions – the latter being
some kind of approximation or coarse graining of the former.
But in Preskill’s argument, the micro and macro processes are both based on the same process – quantum unitary evolution.
This doesn’t seem likely IMHO.
4. Daniel, I too particularly liked the passage in Dave’s essay that read:
At short time scales, information is repeatedly being destroyed, but at longer scales, because of some form of effective encoding and error correction, non-information-destroying quantum
theory is a good approximation to describing physics.
Moreover, the extension of this concept to concrete calculations is well underway, as follows.
(1) Lindblad dynamics has provided us with a mathematically well-posed description of information-destroying quantum processes (indeed, the entirety of Nielsen and Chuang’s textbook is founded on
the postulate that Lindblad dynamics is the sole means by which quantum information can be destroyed).
(2) Thanks to QIT pioneers like Howard Carmichael (who invented the term “quantum unraveling”), Carlton Caves, and Stephen Adler, we know how to unravel discrete Lindblad processes, in complete
generality, as continuous (stochastic) dynamical processes.
(3) Thanks to the analytic insights of Morse theory, the physical insights of QIT pioneers like Wojciech Zurek, and the informatic and thermodynamic insights of mathematicians like Terry Tao, we
appreciate that Lindbladian dynamical processes compress quantum trajectories onto low-dimension state-spaces that (fortunately!) inherit all of the (beautiful!) Kählerian structure of Hilbert
space, while being far richer geometrically.
(4) Thanks to mid-20th century dynamics pioneers like Cartan, Arnold, Smale, Kolmogorov, Mac Lane, and Marsden, and their (subsequent) quantum counterparts—in particular Abhay Ashtekar and Troy
Schilling—we have all the mathematical tools we need to pullback Lindbladian dynamics onto non-Hilbert state-spaces … by methods that are geometrically natural and are computationally efficient.
Thus, if we follow through on Dave's line of inquiry (and that of Preskill, Ogburn, Hawking, etc.), we are led to appreciate that quantum dynamics generically occurs on non-Hilbert state-spaces,
according to all that we already know about quantum dynamical processes.
As for whether Nature’s own state-space is Hilbert-flat versus Kähler-curved … or Hilbert-flat versus Einstein-dynamic … well … in the short term these issues are immaterial, because insofar as
practical computations are concerned, the issue is decided: the viewpoint of modern dynamics is that all state-spaces (both classical and quantum) are effectively non-flat and dynamical.
If you think about it, this represents an unexpected transformation in our understanding of quantum dynamics, that is substantially more radical in its implications for mathematics, science, and
engineering, than anything that ARDA’s QIST panels conceived back in 2002.
We thus appreciate that the post-Hilbert quantum world that Dave’s essay envisions is not some future dream: that post-Hilbert world is right here, right now, and it is burgeoning exponentially
in its practical scope and global strategic consequence.
The overall point is simply this: the advent of post-Hilbert quantum dynamics is good news for everyone on the planet … and especially, it’s very good news for young CT/QIT/QSE researchers.
And on that optimistic note, please let me extend my best wishes … for Happy Holidays to all! 🙂
|
{"url":"https://dabacon.org/pontiff/2010/12/21/consequence-of-the-concept-of-the-universe-as-a-computer/","timestamp":"2024-11-13T20:42:45Z","content_type":"text/html","content_length":"103652","record_id":"<urn:uuid:62b93302-f788-4412-8742-63c22c5ef25f>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00114.warc.gz"}
|
Building capital ships like a boss, part 3: Component blueprints
Ship blueprints
Component blueprints (you are here)
Researching your blueprints
Moving capital blueprints safely
Location, location, location
Moving minerals
Actually building stuff, finally
Selling your ships
So you know which ships you want to build. Now you need the component blueprints to build the components so you can use the ship blueprints to build the ships. Which component blueprints? How many?
Researched to what level? Help!
Here's a table of which components are required for which ships, how much they cost, and where you can find them.
EDIT 2014-5-24: The naglfar no longer uses the launcher hardpoint component.
That's not the whole story, though. Ships use more of some components than others, so you are likely to need duplicates of the most common blueprints in order to build efficiently. How many and of
which blueprint depends on your selection of ship blueprints, but the method to work out how many components you need is as follows:
-Make a spreadsheet.
-Make a list of how many components of each type are required to build all your ships.
-In a second column, list how many component blueprints of each type to get.
-A third column is set to equal the number in the first column, divided by the number in the second column, multiplied by 2.866, divided by 24. This is how many days it will take to build the
components for all your ships (with component PE 20).
What you want to do then is adjust the number in the second column so that the number in the third column is 10-ish. Maybe a little more if your ship blueprints are heavy on rorquals, since they take
longer to build than carriers or dreads.
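The third-column arithmetic can be sketched in a few lines; the 2.866 figure is the hours-per-run number quoted above for PE 20 components, and the component counts below are made-up examples, not real build requirements:

```python
HOURS_PER_RUN = 2.866  # hours per component run at component PE 20, per the text

def build_days(components_needed, blueprint_copies):
    """Days to build all components of one type across duplicate blueprints."""
    return components_needed / blueprint_copies * HOURS_PER_RUN / 24

# Hypothetical totals: adjust blueprint_copies until each result is around 10
print(build_days(120, 2))
print(build_days(45, 1))
```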
Confused yet? Good. This tool is included in my spreadsheet (see part 6 of this guide). Here's how it will look for someone building one of each carrier and dread, plus a rorqual, all at optimal ME:
By the way, it's okay if you have a few more total blueprints than manufacturing slots. You'll always need fewer of some components than others, so some runs will be shorter and the blueprints will
have some downtime. It's like overbooking an airplane flight, except you only have yourself to be angry at when things go horribly wrong.
Material level:
What material level should you research components to? Up to you, but here's a table with build costs and savings for a rorqual at different levels of component research. This is from before
crucible, and today all the cost and savings numbers will be twice as large.
I decided to go with ME100. It's a nice round number, and also happens to be a sweet spot for reselling component blueprints on contracts when you decide to get rid of them.
Production efficiency level:
Okay, how about production efficiency?
Here's a table with time per run and time savings at several PE levels.
I went with PE20.
Go to part 4:
Researching your blueprints
|
{"url":"https://eve-fail.blogspot.com/2012/04/building-capital-ships-like-boss-part-3.html","timestamp":"2024-11-06T05:15:01Z","content_type":"text/html","content_length":"77348","record_id":"<urn:uuid:0ed924f1-b0d8-4f02-8745-0be81169255e>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00606.warc.gz"}
|
On Cochran Theorem (and Orthogonal Projections)
Cochran Theorem – from The distribution of quadratic forms in a normal system, with applications to the analysis of covariance, published in 1934 – is probably the most important one in a regression course. It is an application of a nice result on quadratic forms of Gaussian vectors. More precisely, we can prove that if $\boldsymbol{Y}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d)$ is a random vector with $d$ independent $\mathcal{N}(0,1)$ components then (i) if $A$ is a (square) idempotent matrix, $\boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r$ where $r$ is the rank of matrix $A$, and (ii) conversely, if $\boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r$ then $A$ is an idempotent matrix of rank $r$. And just in case: $A$ being an idempotent matrix means that $A^2=A$, and a lot of results can be derived from this (for instance on the eigenvalues). The proof of that result (at least the (i) part) is nice: we diagonalize matrix $A$, so that $A=P\Delta P^\top$, with $P$ orthonormal. Since $A$ is an idempotent matrix, observe that$$A^2=P\Delta P^\top P\Delta P^\top=P\Delta^2 P^\top$$where $\Delta$ is some diagonal matrix such that $\Delta^2=\Delta$, so the terms on the diagonal of $\Delta$ are either $0$'s or $1$'s. And because the rank of $A$ (and $\Delta$) is $r$, there should be $r$ $1$'s and $d-r$ $0$'s. Now write$$\boldsymbol{Y}^\top A\boldsymbol{Y}=\boldsymbol{Y}^\top P\Delta P^\top\boldsymbol{Y}=\boldsymbol{Z}^\top \Delta\boldsymbol{Z}$$where $\boldsymbol{Z}=P^\top\boldsymbol{Y}$ satisfies $\boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},P^\top P)$, i.e. $\boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d)$. Thus$$\boldsymbol{Z}^\top \Delta\boldsymbol{Z}=\sum_{i:\Delta_{i,i}=1}Z_i^2\sim\chi^2_r$$Nice, isn't it? And there is more (that will actually be strongly connected to Cochran theorem). Let $A=A_1+\dots+A_k$; then the two following statements are equivalent: (i) $A$ is idempotent and $\text{rank}(A)=\text{rank}(A_1)+\dots+\text{rank}(A_k)$; (ii) the $A_i$'s are idempotent and $A_iA_j=0$ for all $i\neq j$.
Now, let us talk about projections. Let $\boldsymbol{y}$ be a vector in $\mathbb{R}^n$. Its projection on the space $\mathcal V(\boldsymbol{v}_1,\dots,\boldsymbol{v}_p)$ (generated by those $p$ vectors) is the vector $\hat{\boldsymbol{y}}=\boldsymbol{V} \hat{\boldsymbol{a}}$ that minimizes $\|\boldsymbol{y} -\boldsymbol{V} \boldsymbol{a}\|$ (in $\boldsymbol{a}$). The solution is$$\hat{\boldsymbol{a}}=( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top \boldsymbol{y} \text{ and } \hat{\boldsymbol{y}} = \boldsymbol{V} \hat{\boldsymbol{a}}$$Matrix $P=\boldsymbol{V} ( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top$ is the orthogonal projection on $\{\boldsymbol{v}_1,\dots,\boldsymbol{v}_p\}$ and $\hat{\boldsymbol{y}} = P\boldsymbol{y}$.
Now we can recall Cochran theorem. Let $\boldsymbol{X}\sim\mathcal{N}(\boldsymbol{\mu},\sigma^2\mathbb{I}_d)$ for some $\sigma>0$ and $\boldsymbol{\mu}$. Consider orthogonal sub-vector spaces $F_1,\dots,F_m$, with dimensions $d_i$. Let $P_{F_i}$ be the orthogonal projection matrix on $F_i$. Then (i) the vectors $P_{F_1}\boldsymbol{X},\dots,P_{F_m}\boldsymbol{X}$ are independent, with respective distributions $\mathcal{N}(P_{F_i}\boldsymbol{\mu},\sigma^2\mathbb{I}_{d_i})$, and (ii) the random variables $\|P_{F_i}(\boldsymbol{X}-\boldsymbol{\mu})\|^2/\sigma^2$ are independent and $\chi^2_{d_i}$ distributed.
We can try to visualize those results. For instance, the orthogonal projection of a random vector has a Gaussian distribution. Consider a two-dimensional Gaussian vector
library(mnormt)  # provides rmnorm() and dmnorm()
r = .7
s1 = 1
s2 = 1
Sig = matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
Y = rmnorm(n = 1000,mean=c(0,0),varcov = Sig)
vu = seq(-4,4,length=101)
vz = outer(vu,vu,function (x,y) dmnorm(cbind(x,y),
mean=c(0,0), varcov = Sig))
Consider now the projection of points $\boldsymbol{y}=(y_1,y_2)$ on the straight line with directional vector $\overrightarrow{\boldsymbol{u}}$ with slope $a$ (say $a=2$). To get the projected point $\boldsymbol{x}=(x_1,x_2)$ recall that $x_2=ax_1$ and $\overrightarrow{\boldsymbol{x},\boldsymbol{y}}\perp\overrightarrow{\boldsymbol{u}}$. Hence, the following code will give us the orthogonal projection:
p = function(a){
  # project each row of Y onto the line x2 = a*x1
  t = (Y[,1] + a*Y[,2])/(1 + a^2)
  cbind(t, a*t)
}
P = p(2)
for(i in 1:20) segments(Y[i,1],Y[i,2],P[i,1],P[i,2],lwd=4,col="red")
Now, if we look at the distribution of points on that line, we get… a Gaussian distribution, as expected,
z = sqrt(P[,1]^2+P[,2]^2)*c(-1,+1)[(P[,1]>0)*1+1]
vu = seq(-6,6,length=601)
vv = dnorm(vu,mean(z),sd(z))
hist(z,probability = TRUE,breaks = seq(-4,4,by=.25))
Or course, we can use the matrix representation to get the projection on $\overrightarrow{\boldsymbol{u}}$, or a normalized version of that vector actually
U = c(1,a)/sqrt(a^2+1)
[1] 0.4472136 0.8944272
matP = U %*% solve(t(U) %*% U) %*% t(U)
matP %*% Y[1,]
[1,] -0.1120555
[2,] -0.2241110
x0 y0
-0.1120555 -0.2241110
(which is consistent with our manual computation). Now, in Cochran theorem, we start with independent random variables,
Y = rmnorm(n = 1000,mean=c(0,0),varcov = diag(c(1,1)))
Then we consider the projection on $\overrightarrow{\boldsymbol{u}}$ and $\overrightarrow{\boldsymbol{v}}=\overrightarrow{\boldsymbol{u}}^\perp$
U = c(1,a)/sqrt(a^2+1)
matP1 = U %*% solve(t(U) %*% U) %*% t(U)
P1 = Y %*% matP1
z1 = sqrt(P1[,1]^2+P1[,2]^2)*c(-1,+1)[(P1[,1]>0)*1+1]
V = c(a,-1)/sqrt(a^2+1)
matP2 = V %*% solve(t(V) %*% V) %*% t(V)
P2 = Y %*% matP2
z2 = sqrt(P2[,1]^2+P2[,2]^2)*c(-1,+1)[(P2[,1]>0)*1+1]
We can plot those two projections
and observe that the two are indeed independent Gaussian variables. And (of course) their squared norms are $\chi^2_{1}$ distributed.
OpenEdition suggests that you cite this post as follows:
Arthur Charpentier (January 15, 2020). On Cochran Theorem (and Orthogonal Projections). Freakonometrics. Retrieved November 9, 2024 from https://doi.org/10.58079/ovec
|
{"url":"https://freakonometrics.hypotheses.org/59040","timestamp":"2024-11-09T22:01:34Z","content_type":"text/html","content_length":"157059","record_id":"<urn:uuid:13a669c7-5ff7-442d-b903-c92aafb8fa13>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00535.warc.gz"}
|
Prajapati Vishva Ashram
Mathematics has been traced to exist as a knowledge among peoples of Indus Valley Civilization with Harappan culture as has been proved by continuing archeological excavations in lands presently
known as bhaart or India and Pakistan. Indus Valley Civilization is acknowledged as the oldest civilization on this planet earth. The continuing archeological research indicates that this
civilization included well planned and engineered cities, towns and villages integrated with extensive Harappan urban and rural cultures around dating back to at least 6500 BC and continued to
dominate the region for at least 700 years, from 2600 to 1900 B.C. It was only in the 1920's that the buried cities and villages of the Indus valley were recognized by archaeologists as representing
an undiscovered civilization. The buildings, artifacts and special seals found in Harappan culture sites indicate that the culture practiced veDik lifestyles. This Harappan culture of veDik lifestyle is proved to have extended to east Europe, as shown by archeological research in areas around present day Turkey. The Babylonians and Egyptians, who have a history dating back to 3500 BC, also have evidence of interaction with Harappan culture. The war of mHaa-bhaart was fought around 5000 years ago (3000 BC) and included the entire lands of Asia, Africa and Europe, as can be deduced from the description of the armies of the different peoples who took part in the 18-day war, which killed about 1.7 billion people. In all these different lands the history of mathematics is traced by
the following events:
• circa 3000 BC: The Babylonians of ancient Mesopotamia and the ancient Egyptians left the earliest records of organized mathematics. Well-preserved Babylonian clay tablets show wedge-shaped
writing known as cuneiform. The earliest tablets date from about 3000 BC. Arithmetic and algebra dominated their mathematics, which was used in commerce to exchange money and merchandise, to compute simple
and compound interest, to calculate taxes, and to allocate shares of a harvest to the state, temple, and farmer. The building of canals, granaries, and other public works also required using
arithmetic and geometry. Calendar reckoning, used to determine the times for planting and for religious events, was another important application of mathematics.
• To circa 31 BC: The Greeks adopted elements of mathematics from the Babylonians and the Egyptians in circa 600 BC through the development of mathematics by Thales and Pythagoras. Greek mathematicians invented abstract mathematics based on deductive proof: logical axioms and proofs of the mathematical concepts. The mathematics that had existed before their time was a collection of conclusions based on observation of nature and natural phenomena. The development of Greek mathematics ended in 31 BC with the Roman conquest of Egypt, the last remnant of Alexander's Greek empire.
• To circa 476 AD: Nothing mathematically significant was accomplished by the Romans. The Roman numeration system was based on Roman numerals, which were cumbersome for calculation. Roman orator
Cicero boasted that the Romans were not dreamers like the Greeks but applied their study of mathematics to the useful.
• To circa 800 AD: After the decline of Greece and Rome, mathematics flourished in India. Mathematics in India was largely a tool for astronomy, yet Indian mathematicians discovered a number of
important concepts. The current Western numeration system, for example, is based on the Indian system, which was conveyed to the Western world by Arabs and is generally known as the Hindu-Arabic numeration system.
The system of numbers in current use with each number having an absolute value and a place value (units, tens, hundreds, and so forth) originated in India. Indian Mathematicians were the first to
recognize zero as both an integer and a placeholder.
Mathematics existed in veDik lifestyle as is shown in science of jyotiSH which is one of the 6 veDNg meaning limbs of veD. veD contains all the sciences of knowledge of creation and life. As per
NaarD puraaAN, Chapter 54 the science of jyotiSH was relayed by pRjaapti BRH`maa^o, to Daevtaao^o, RUSHio^o and munio^o for the use by humans for the fulfillment of their duties as per their
DHARm^o. The science of jyotiSH was expounded in 400,000 s`lok (aphorisms in verse forms), which are divided into 3 categories:
1. gNit = mathematics and astronomy
2. jaatk = horoscopy
3. sMhitaa = astrology
The above means that knowledge of mathematics is encoded in all creations but is manifested as knowledge in individual creations as per the needs of the individual creation in the "space-time
continuum" and as warranted by the occurrence of intersections of "world lines" as per the relativity theory developed by Albert Einstein in his science discoveries of 1905 to 1915. To date only
parts of veD texts have been deciphered with the SNskrut grammar knowledge developed by pANiANi in circa 500 BC. And even with this limited deciphering of the veDik texts in sNskrut language the
concepts in these texts are still very hard to understand. But the veD texts deciphered to date has numbers everywhere explaining space in multiple dimensions, time in various reckonings as per
the location in space of the domains of various existences of various creations, quantities of all materials in creation and also natural processes in the universe for which the veD text is meant
for. We will share with each other more knowledge on this after knowing the history of development of mathematics as humans know in 2000 AD.
□ ( pRjaapti BRH`maa, per veD, is the creator, as empowered by creator BRH`m, of all that exists in a universe. Daevtaao are all entities who are which are manifestations of creator BRH`m's
powers and forces in nature). RUSHio and munio are sages who were versed in veD and whose function is to look after the welfare of all creations through their powers obtained through tps and
yGN. tps means totally focused meditation on creator BRH`m resulting in the meditative mode called smaaDHi whereby the meditator is blessed by creator BRH`m with the powers to be sub-creator
and/or sub-sustainer and/or sub-re-creator in birth-death cycles called sNsaar in BRH'm's stead. yGN is a process and a rite prescribed in veD in which oblation is offered to creator BRH`m or
any of His manifestations. It is stated in veD that tps and yGN is what creates, sustains and recycles the created through a process called ly a universe and everything in that universe.
DHARm is the laws, rules and regulations in nature and among the created that empowers the harmonious co-existence of all created in a universe.)
It is not known when the Indian numeration system now in use around the world was developed. But digits similar to the Arabic numerals used today have been found in a Hindu temple
built about 250 BC.
In circa 500 AD, Indian mathematician and astronomer Aryabhata went beyond the Greek mathematicians in his use of fractions, as opposed to whole numbers, to solve indeterminate equations
(equations that have no unique solutions). The mathematical development by Aryabhata has already been outlined in the previous serial titled "veDik MATHEMATICS - What does it mean".
In circa 630 AD Indian mathematician Brahmagupta expounded the concept of negative numbers in mathematics for the first time in the history of current mathematics. He presented rules for them in
terms of fortunes (positive numbers) and debts (negative numbers). Brahmagupta’s understanding of numbers exceeded that of other mathematicians of the time. He also made full use of the place
system system in existence in India in his method of multiplication. Brahmagupta wrote two treatises on mathematics and astronomy dealing with topics such as eclipses, risings and settings, and
conjunctions of the planets with each other and with fixed stars.
In circa 900 AD, Jain mathematician Mahavira stated rules for operations with zero.
In circa 1200 AD Bhaskara supplied the correct answer for division by zero as well as rules for operating with irrational numbers. Bhaskara wrote six treatises on mathematics, including Lilavati
meaning The Beautiful, which summarized mathematical knowledge existing in India up to his time, and Karanakutuhala, meaning “Calculation of Astronomical Wonders.”
• 800 AD to 1400 AD: In 800 AD, Indian mathematics reached Baghdaad, a major early center of Islamic culture. Mathematical masterpieces of the Indians and those of the Greeks were translated into Arabic in
centers of Islamic learning, where mathematical discoveries continued during the period known in the West as the Middle Ages.
In circa 800 AD Arab mathematician al-Khwaarizmī wrote a systematic introduction to algebra called Kitaab al-jabr w’al Muqabalah meaning Book of Restoring and Balancing. The English word algebra
comes from al-jabr in the title of this mathematical treatise. Al-Khwaarizmī’s algebra was founded on Brahmagupta’s work, which he duly credited, and showed the influence of Babylonian and Greek
mathematics as well. A 12th-century Latin translation of al-Khwaarizmī’s treatise was crucial for the later development of algebra in Europe. Al-Khwaarizmī’s name is the source of the word algorithm.
In 900 AD Arab scholars completed the acquiring of all Indian and Greek mathematics in existence at that time and began further development.
From 900 AD to 1000 AD, Alhazen, an outstanding Arab scientist, developed algebraic solutions of quadratic and cubic equations. Al-Karaji continued the development of al-Khwaarizmī’s algebra of polynomials (mathematical expressions that are the sum of a number of terms), extending it to polynomials with an infinite number of terms. Geometers such as Ibrahim ibn Sinan continued Archimedes’s investigations of areas and volumes, and Kamal al-Din and others applied the theory of conic sections to solve problems in optics.
In circa 1100 AD, Persian mathematician Omar Khayyam and other Arab mathematicians solved certain cubic equations geometrically by using conic sections. Arab astronomers contributed the tangent and cotangent to trigonometry.
In circa 1200, Arab astronomer Nasir al-Din al-Tusi created the mathematical disciplines of plane and spherical trigonometry and established trigonometry as a discipline separate from astronomy. Arab mathematicians made important discoveries in the theory of numbers and also developed a variety of numerical methods for solving equations.
Through their Arabic translations, the Arab mathematicians preserved much of the Greek mathematics in existence up to 1500 AD, while civilization in Europe was in stagnation after the destruction of the Roman Empire. Europe re-acquired much of this translated Greek mathematics, along with the Indian mathematics, when the texts were re-translated into Latin, the written language of educated Europeans, starting sometime around 1100 AD.
With the blessing of creator BRH`m we will continue reviewing the history of the development of mathematics spearheaded in Europe in the Enlightenment Age (from circa 1500 AD), after the completion of the Middle Ages. Through mathematics, Europe started making fast headway in understanding the veD = SCIENCES OF CREATION AND LIFE in the universe that the Europeans and the rest of humankind knew....
om tt st......om bRH`myae nmH......
|
{"url":"http://prajapati-samaj.ca/shownews.asp?NewsID=639","timestamp":"2024-11-03T12:46:41Z","content_type":"text/html","content_length":"25303","record_id":"<urn:uuid:5c9ccce5-40b0-415e-80db-6f5d32384375>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00203.warc.gz"}
|
ACT and SAT Math Tips: Thanksgiving Day Drills - ACT and SAT Blog
Just because the holidays are rolling around does not mean you can skip your ACT and SAT prep! You can use your time off this week to focus on the test while you’re not worrying about other school
work. Here are a few Turkey Day math questions to kick start your day of family, football, and sweet, sweet gluttony.
Happy Thanksgiving from everyone at PowerScore!
1. DIAGRAM the question, where P = potatoes, B = butter, and M = milk:
So 2/10 of the recipe is butter. Now TRANSLATE:
2/10 of 6 pounds is butter
2/10 × 6 pounds = butter
12/10 pounds = butter
1.2 pounds = butter Yuck.
Answer choice (C) is correct.
2. DIAGRAM the question:
We know the radius is 4 because the diameter is 8. Now all we need to find is the length of the arc along the edge of the crust on the piece of pie.
To find the length of the arc, we simply multiply the circumference of the whole pie by the fraction of the arc in question. Since the pie was cut into eight equal pieces, we are looking for 1/8 of the circumference:
2πr × 1/8 → 2π(4) × 1/8 → 8π × 1/8 → π
Thus, the perimeter of a piece of pie is 4 + 4 + π → 8 + π
Answer choice (A) is correct.
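For readers who like to double-check with code, the arithmetic above can be verified in a few lines of Python (a quick sanity check, not part of the original solution):

```python
import math

radius = 8 / 2                      # diameter of 8 -> radius of 4
arc = 2 * math.pi * radius / 8      # 1/8 of the full circumference
perimeter = radius + radius + arc   # two straight edges plus the crust arc

print(round(perimeter, 2))  # 11.14, i.e. 8 + pi
```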
3. DIAGRAM the question:
Let’s name the grandchildren Adam, Bev, Cait, Dave, Eduardo, Felicia, and Michael, each represented by the letter of their first name in the diagram. There is only one “person” that can sit in
Peaches’ spot—Peaches. Place a 1 in her chair.
Now look to the left of her seat. All of the grandkids EXCEPT Michael can sit there. So that is 6 people (which we label the chair). Let’s put Adam in that seat.
Now look to the right of Peaches’ seat. Adam is already sitting and Michael is not allowed to sit next to Peaches, so that leaves 5 grandkids. Let’s give this seat to Bev.
Go to the far right seat at the head of the table. Adam and Bev are sitting already, so that leaves 5 grandchildren: Cait, Dave, Eduardo, Felicia, and Michael (Michael is added to the mix now that
the two seats next to Peaches are taken). Give this seat to Cait.
Work your way around the table, listing who is available and crossing off who is assigned the seat. When you finish marking down the possibilities for each seat, you simply multiply those numbers together:
6 × 1 × 5 × 5 × 4 × 3 × 2 × 1 = 3600
Answer choice (C) is correct.
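A brute-force check in Python confirms the count. (The seat labeling here is an assumption for illustration: Peaches' chair is fixed, and tuple positions 0 and 1 stand for the two chairs flanking her.)

```python
from itertools import permutations

# Count the arrangements of the seven grandkids in which Michael does
# not occupy either of the two seats next to Peaches.
kids = ["Adam", "Bev", "Cait", "Dave", "Eduardo", "Felicia", "Michael"]
count = sum(1 for p in permutations(kids) if "Michael" not in p[:2])

print(count)  # 3600, matching 6 x 1 x 5 x 5 x 4 x 3 x 2 x 1
```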
Want some more help? Check out our ACT offerings!
If you have any questions, feel free to contact me at vwood@powerscore.com.
|
{"url":"https://blog.powerscore.com/sat/turkeydaydrills/","timestamp":"2024-11-05T19:37:19Z","content_type":"text/html","content_length":"37182","record_id":"<urn:uuid:3cf3afc4-6c6c-49ac-a1a5-8acb6aa51f0d>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00253.warc.gz"}
|
Multi-Stage Dantzig Selector
Part of Advances in Neural Information Processing Systems 23 (NIPS 2010)
Ji Liu, Peter Wonka, Jieping Ye
We consider the following sparse signal recovery (or feature selection) problem: given a design matrix $X\in \mathbb{R}^{n\times m}$ $(m\gg n)$ and a noisy observation vector $y\in \mathbb{R}^{n}$
satisfying $y=X\beta^*+\epsilon$ where $\epsilon$ is the noise vector following a Gaussian distribution $N(0,\sigma^2I)$, how to recover the signal (or parameter vector) $\beta^*$ when the signal is
sparse? The Dantzig selector has been proposed for sparse signal recovery with strong theoretical guarantees. In this paper, we propose a multi-stage Dantzig selector method, which iteratively
refines the target signal $\beta^*$. We show that if $X$ obeys a certain condition, then with a large probability the difference between the solution $\hat\beta$ estimated by the proposed method and
the true solution $\beta^*$ measured in terms of the $l_p$ norm ($p\geq 1$) is bounded as \begin{equation*} \|\hat\beta-\beta^*\|_p\leq \left(C(s-N)^{1/p}\sqrt{\log m}+\Delta\right)\sigma, \end{equation*} $C$ is a constant, $s$ is the number of nonzero entries in $\beta^*$, $\Delta$ is independent of $m$ and is much smaller than the first term, and $N$ is the number of entries of $\beta^*$ larger than a certain value in the order of $\mathcal{O}(\sigma\sqrt{\log m})$. The proposed method improves the estimation bound of the standard Dantzig selector approximately from $Cs^{1/p}\sqrt{\log m}\sigma$ to $C(s-N)^{1/p}\sqrt{\log m}\sigma$ where the value $N$ depends on the number of large entries in $\beta^*$. When $N=s$, the proposed algorithm achieves the oracle solution with a high
probability. In addition, with a large probability, the proposed method can select the same number of correct features under a milder condition than the Dantzig selector.
|
{"url":"https://proceedings.nips.cc/paper_files/paper/2010/hash/e5f6ad6ce374177eef023bf5d0c018b6-Abstract.html","timestamp":"2024-11-08T19:19:22Z","content_type":"text/html","content_length":"9530","record_id":"<urn:uuid:0564fd1c-9397-4137-b22f-917a88a6e1c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00393.warc.gz"}
|
sulfur reduction of iron ore mineral processing flotation
Sulfur reduction in Sangan iron ore by flotation. Bahram Rezaee 1, Atefe Sarvi 2*, Atiyeh Eslamian 2, Seyed MehdiJebraeeli 2 and Abolfazl Zabihi 3. 1 Mineral Processing, professor, mining and
metallurgical engineering department, Amirkabir university of technology Tehran, Iran 2 Mineral Processing, Expert...
|
{"url":"https://www.esko.net.pl/2024_10_24+34616.html","timestamp":"2024-11-05T13:14:48Z","content_type":"text/html","content_length":"47688","record_id":"<urn:uuid:8be559cf-30ae-45b2-a530-d8e2a7231fd2>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00062.warc.gz"}
|
Fisher’s Fiducial Inference for Parameters of Uniform Distribution
1. Introduction
In 1930 Fisher proposed an inference method based on the idea of fiducial probability [1,2]. Fisher’s fiducial inference has been widely applied in practice. Yet the fiducial argument remains something of an enigma in classical statistics, an enigma that statistical scholars have yet to resolve.
Fisher’s fiducial inference for the parameters of a totality
The example below shows that Neyman-Pearson’s confidence interval has room for improvement. Let
Appling pivotal function
And using its density
the 95% confidence interval of
The length of interval (*) is independent of the sample value! Assume that
Is got in a certain sample (Note that
Fisher’s fiducial inference offers an alternative for solving problems similar to the one above.
2. Fiducial Distribution
Let that
It is not difficult to show that Y and Z are the minimum and maximum order statistics of the sample from
See parameters
Applying the relevant results about the transformation of r.v.’s, it can be shown that:
Theorem 1. The fiducial density function of vector
If only one parameter needs to be considered, the other parameter is then a so-called nuisance parameter. We insist that the marginal distribution should be used in this situation. Hence we find the two marginal density functions of
Corollary 1. The fiducial density functions of only one parameters
3. Estimation
It is easy to see that fiducial density
Theorem 2. The maximum fiducial estimators of
It can also be got that
To find the median of
And get
Found the median of
Theorem 3. The fiducial median estimators of
The maximum fiducial estimators
It can be shown that:
Theorem 4. The fiducial expect estimators of
The fiducial probability that
In the same way
Give a fiducial probability
Theorem 5. The
Proof. Denote that
Using (3.7) it can be derived that
And the equation below can be obtained by using (3.8)
Let us consider the
for a certain d > 0, because
Theorem 6. The
Proof. At first, Equation (3.12) has a positive solution d because its left side equals 1 when
Equation (3.12) is used here. □
4. The Case That One Parameter Is in Variation
Let us consider the case that only one parameter is in variation.
It should be noted that, using (2.4) and (2.6), the conditional density of
Comparing (4.1) and (4.2) shows that (4.1) coincides with the conditional density of
The maximum fiducial estimators, the fiducial median estimators and the fiducial expect estimators of
The fiducial probability of one interval estimator
The similar results for
If there is a relation between the parameters, such as the example in Section 1, this situation may be thought of as missing parameter(s). We insist that the conditional distribution should be used in
this situation. Under the condition that
It can be seen that for distribution
is a 100% fiducial interval of
is a
Using the above results to the example in Section 1 it can be got that any subinterval of [0.89, 1.08] with the length 0.95 × 0.19 is the 95% fiducial interval of
5. Hypothesis Testing
Let us consider the hypothesis testing problem. Equations (3.7) and (3.8) can be used to calculate the fiducial probability that the parameter belongs to the range in which a certain hypothesis is
Theorem 7. For hypothesis
And one should reject H[1] w.p.1 if
Proof. Choose
If for a certain[0] when
Note that the left hand of (5.2) is the quantile of order
[0] when
Theorem 8. For hypothesis
The fiducial probability
Proof. The result can be got just like theorem 7. □
The parallel results for
Theorem 9. Hypothesis
The fiducial probability
This theorem can be got by calculating the above integral. □
The fiducial probability, in the situation where the parameters belong to the range in which a certain hypothesis in Theorem 7 or 8 is true, can easily be obtained by using (4.3) in the case that one parameter is in variation.
Example. For the example in Section 1, consider the hypothesis
It can be shown that
If for a certain
So the criterion is that reject H[0] when
Please note that the left hand of (5.4) is the quantile of order [0] when
That is
6. Discussion
Up to now, the discussion on Fisher’s fiducial inference has remained intuitive and imprecise. There are two problems: 1) Just what does a fiducial probability mean? 2) How can one derive the unique fiducial distribution of the parameter(s)? Paper [4] considered the first problem. For the second problem, we conjecture that two sufficient statistics of least dimension, whose dimension coincides with that of the parameter(s), must derive the same fiducial distribution of the parameter(s). And we insist that the marginal distribution should be used in the situation when there is (are) nuisance parameter(s), and that the conditional distribution should be used in the situation when there is (are) (a) relation(s) between the parameters.
|
{"url":"https://scirp.org/journal/paperinformation?paperid=30357","timestamp":"2024-11-03T03:09:57Z","content_type":"application/xhtml+xml","content_length":"112565","record_id":"<urn:uuid:bad7aae5-e757-479c-a092-b72e393bc154>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00765.warc.gz"}
|
3-isogeny selmer groups and ranks of abelian varieties in quadratic twist families over a number field
For an abelian variety A over a number field F, we prove that the average rank of the quadratic twists of A is bounded, under the assumption that the multiplication-by-3 isogeny on A factors as a
composition of 3-isogenies over F . This is the first such boundedness result for an absolutely simple abelian variety A of dimension greater than 1. In fact, we exhibit such twist families in
arbitrarily large dimension and over any number field. In dimension 1, we deduce that if E/F is an elliptic curve admitting a 3-isogeny, then the average rank of its quadratic twists is bounded. If F
is totally real, we moreover show that a positive proportion of twists have rank 0 and a positive proportion have 3-Selmer rank 1. These results on bounded average ranks in families of quadratic
twists represent new progress toward Goldfeld's conjecture, which states that the average rank in the quadratic twist family of an elliptic curve over Q should be 1/2, and the first progress toward the analogous conjecture over number fields other than Q. Our results follow from a computation of the average size of the Φ-Selmer group in the family of quadratic twists of an abelian variety
admitting a 3-isogeny Φ.
All Science Journal Classification (ASJC) codes
Dive into the research topics of '3-isogeny selmer groups and ranks of abelian varieties in quadratic twist families over a number field'. Together they form a unique fingerprint.
|
{"url":"https://collaborate.princeton.edu/en/publications/3-isogeny-selmer-groups-and-ranks-of-abelian-varieties-in-quadrat","timestamp":"2024-11-09T04:02:05Z","content_type":"text/html","content_length":"48913","record_id":"<urn:uuid:937201aa-d912-4989-9a76-98bece521edb>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00650.warc.gz"}
|
What is a logistic regression in R?
Logistic regression is a statistical technique used to analyze the relationship between a categorical dependent variable and one or more independent variables. In R, the glm() function (generalized
linear model) can be used to perform logistic regression.
Here's an example of how to perform logistic regression using glm() in R:
# load the dataset
data <- read.csv("data.csv")

# fit the logistic regression model
model <- glm(y ~ x1 + x2 + x3, data = data, family = binomial)

# summary of the model
summary(model)
In this example, y is the dependent variable and x1, x2, and x3 are the independent variables. The family argument is set to binomial because we are performing logistic regression.
The summary() function provides information on the coefficients, standard errors, and significance levels for each variable in the model, as well as goodness-of-fit statistics like the deviance and the AIC.
|
{"url":"https://devhubby.com/thread/what-is-a-logistic-regression-in-r","timestamp":"2024-11-02T04:33:16Z","content_type":"text/html","content_length":"117400","record_id":"<urn:uuid:473dbe63-8105-46e6-a4c2-b597b0b668e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00871.warc.gz"}
|
Jacob Bernoulli
Swiss mathematician best known for his work on probability theory, analytic geometry and development of the calculus.
Also developed the field of calculus of variations.
Developed the technique of separation of variables, and in $1696$ solved what is now known as Bernoulli's (Differential) Equation.
Invented polar coordinates.
Elder brother of Johann Bernoulli, with whom he famously quarrelled.
He and Johann, having encountered Leibniz's early papers in the Acta Eruditorum, became his most important students.
Solved the Brachistochrone problem, which had been posed in $1696$ by his brother Johann.
Also investigated the catenary and the logarithmic spiral.
• Born: 27 Dec 1654 in Basel, Switzerland
• 1687: Became Professor of Mathematics at Basel
• Died: 16 Aug 1705 in Basel, Switzerland
Theorems and Definitions
Results named for Jacob Bernoulli can be found here.
Definitions of concepts named for Jacob Bernoulli can be found here.
Notable Quotes
Invito patre sidera verso (Against my father's will I study the stars)
-- Personal motto, created in memory of his father who opposed his desire to study mathematics and astronomy and tried to force him to study to become a theologian.
Also known as
Jacob Bernoulli is also known as James, Jacques or Jakob.
Sometimes reported as Jakob I, or Jacob I, so as to distinguish him from Jakob II Bernoulli.
Also see
|
{"url":"https://proofwiki.org/wiki/Mathematician:Jacob_Bernoulli","timestamp":"2024-11-13T09:32:45Z","content_type":"text/html","content_length":"52503","record_id":"<urn:uuid:880afae1-0f65-420f-9d19-9ce5eb9b2017>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00249.warc.gz"}
|
Double Choco
Double Choco is a logic puzzle published by Nikoli. In a rectangular or square grid exactly half of cells are painted gray. The aim is to divide the grid into regions. Each region must contain one
area of white cells and one area of gray cells. A pair of areas must be of the same shape and size (the areas may be rotated or mirrored). A number indicates how many cells of the same color the
region contains. A region may contain more than one cell with a number (in this case the cells contain the same number).
Cross+A can solve puzzles from 4 x 4 to 30 x 30.
|
{"url":"https://cross-plus-a.com/html/cros7dbch.htm","timestamp":"2024-11-07T00:59:33Z","content_type":"text/html","content_length":"1568","record_id":"<urn:uuid:8e12d88d-b735-4b70-a9cb-234025fabe46>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00215.warc.gz"}
|
Statistical Distributions
Every statistics book provides a listing of statistical distributions, with their properties, but browsing through these choices can be frustrating to anyone without a statistical background, for two
reasons. First, the choices seem endless, with dozens of distributions competing for your attention, with little or no intuitive basis for differentiating between them. Second, the descriptions tend
to be abstract and emphasize statistical properties such as the moments, characteristic functions and cumulative distributions. In this appendix, we will focus on the aspects of distributions that
are most useful when analyzing raw data and trying to fit the right distribution to that data.
Fitting the Distribution
When confronted with data that needs to be characterized by a distribution, it is best to start with the raw data and answer four basic questions about the data that can help in the characterization.
The first relates to whether the data can take on only discrete values or whether the data is continuous; whether a new pharmaceutical drug gets FDA approval or not is a discrete value but the
revenues from the drug represent a continuous variable. The second looks at the symmetry of the data and if there is asymmetry, which direction it lies in; in other words, are positive and negative
outliers equally likely or is one more likely than the other. The third question is whether there are upper or lower limits on the data; there are some data items like revenues that cannot be lower than zero whereas there are others like operating margins that cannot exceed a value (100%). The final and related question relates to the likelihood of observing extreme values in the distribution;
than zero whereas there are others like operating margins that cannot exceed a value (100%). The final and related question relates to the likelihood of observing extreme values in the distribution;
in some data, the extreme values occur very infrequently whereas in others, they occur more often.
Is the data discrete or continuous?
The first and most obvious categorization of data should be on whether the data is restricted to taking on only discrete values or if it is continuous. Consider the inputs into a typical project
analysis at a firm. Most estimates that go into the analysis come from distributions that are continuous; market size, market share and profit margins, for instance, are all continuous variables.
There are some important risk factors, though, that can take on only discrete forms, including regulatory actions and the threat of a terrorist attack; in the first case, the regulatory authority may
dispense one of two or more decisions which are specified up front and in the latter, you are subjected to a terrorist attack or you are not.
With discrete data, the entire distribution can either be developed from scratch or the data can be fitted to a pre-specified discrete distribution. With the former, there are two steps to building
the distribution. The first is identifying the possible outcomes and the second is to estimate probabilities to each outcome. As we noted in the text, we can draw on historical data or experience as
well as specific knowledge about the investment being analyzed to arrive at the final distribution. This process is relatively simple to accomplish when there are a few outcomes with a
well-established basis for estimating probabilities, but becomes more tedious as the number of outcomes increases. If it is difficult or impossible to build up a customized distribution, it may still be possible to fit the data to one of the following discrete distributions:
a. Binomial distribution: The binomial distribution measures the probabilities of the number of successes over a given number of trials with a specified probability of success in each try. In the
simplest scenario of a coin toss (with a fair coin), where the probability of getting a head with each toss is 0.50 and there are a hundred trials, the binomial distribution will measure the
likelihood of getting anywhere from no heads in a hundred tosses (very unlikely) to 50 heads (the most likely) to 100 heads (also very unlikely). The binomial distribution in this case will be
symmetric, reflecting the even odds; as the probabilities shift from even odds, the distribution will get more skewed. Figure 6A.1 presents binomial distributions for three scenarios – two with 50%
probability of success and one with a 70% probability of success and different trial sizes.
Figure 6A.1: Binomial Distribution
As the probability of success is varied (from 50%) the distribution will also shift its shape, becoming positively skewed for probabilities less than 50% and negatively skewed for probabilities
greater than 50%.
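As a quick illustration (not from the original appendix), the binomial probabilities for the coin-toss scenario can be computed with Python's standard library alone:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials,
    each with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# 100 fair coin tosses: 50 heads is the single most likely outcome,
# while 0 or 100 heads are vanishingly unlikely.
probs = {k: binom_pmf(k, 100, 0.5) for k in range(101)}
mode = max(probs, key=probs.get)

print(mode)  # 50
```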
b. Poisson distribution: The Poisson distribution measures the likelihood of a number of events occurring within a given time interval, where the key parameter that is required is the average number of events in the given interval (λ). The resulting distribution looks similar to the binomial, with the skewness being positive but decreasing with λ. Figure 6A.2 presents three Poisson distributions, with λ ranging from 1 to 10.
Figure 6A.2: Poisson Distribution
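A minimal sketch of the Poisson pmf, with a check that its positive skewness shrinks as the average event count grows (the closed-form skewness 1/√λ is a standard result, not taken from the text):

```python
from math import exp, factorial, sqrt

def poisson_pmf(k, lam):
    """Probability of observing k events in an interval that
    averages lam events."""
    return lam**k * exp(-lam) / factorial(k)

def skew(lam):
    """Skewness of the Poisson distribution: positive, shrinking in lam."""
    return 1 / sqrt(lam)

print(skew(1), skew(10))  # skewness falls from 1.0 toward ~0.316
```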
c. Negative Binomial distribution: Returning again to the coin toss example, assume that you hold the number of successes fixed at a given number and estimate the number of tries you will have before
you reach the specified number of successes. The resulting distribution is called the negative binomial and it very closely resembles the Poisson. In fact, the negative binomial distribution
converges on the Poisson distribution, but will be more skewed to the right (positive values) than the Poisson distribution with similar parameters.
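The negative binomial described above (number of trials until a fixed number of successes) can be sketched as follows; the numerical check simply confirms the probabilities sum to one:

```python
from math import comb

def neg_binom_pmf(n, r, p):
    """Probability that the r-th success arrives exactly on trial n,
    with success probability p on each trial."""
    return comb(n - 1, r - 1) * p**r * (1 - p)**(n - r)

# Fair coin, waiting for the 3rd head: probabilities over all
# possible trial counts must sum to 1.
total = sum(neg_binom_pmf(n, 3, 0.5) for n in range(3, 200))
print(round(total, 9))  # 1.0
```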
d. Geometric distribution: Consider again the coin toss example used to illustrate the binomial. Rather than focus on the number of successes in n trials, assume that you were measuring the
likelihood of when the first success will occur. For instance, with a fair coin toss, there is a 50% chance that the first success will occur at the first try, a 25% chance that it will occur on the
second try and a 12.5% chance that it will occur on the third try. The resulting distribution is positively skewed and looks as follows for three different probability scenarios (in figure 6A.3):
Figure 6A.3: Geometric Distribution
Note that the distribution is steepest with high probabilities of success and flattens out as the probability decreases. However, the distribution is always positively skewed.
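The fair-coin probabilities quoted in the passage (50%, 25%, 12.5%) fall straight out of the geometric pmf:

```python
def geometric_pmf(k, p):
    """Probability that the first success occurs on try k,
    with success probability p per try."""
    return (1 - p) ** (k - 1) * p

# Fair coin: 50% on the first try, 25% on the second, 12.5% on the
# third, exactly as in the text.
print(geometric_pmf(1, 0.5), geometric_pmf(2, 0.5), geometric_pmf(3, 0.5))
```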
e. Hypergeometric distribution: The hypergeometric distribution measures the probability of a specified number of successes in n trials, without replacement, from a finite population. Since the
sampling is without replacement, the probabilities can change as a function of previous draws. Consider, for instance, the possibility of getting four face cards in hand of ten, over repeated draws
from a pack. Since there are 16 face cards and the total pack contains 52 cards, the probability of getting four face cards in a hand of ten can be estimated. Figure 6A.4 provides a graph of the
hypergeometric distribution:
Figure 6A.4: Hypergeometric Distribution
Note that the hypergeometric distribution converges on the binomial distribution as the population size increases.
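The face-card example can be computed directly. This is a sketch of the standard hypergeometric pmf, using the text's premise of 16 face cards in a 52-card deck:

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """Probability of k successes when drawing n items, without
    replacement, from a population of N containing K successes."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Exactly 4 face cards in a 10-card hand from a 52-card deck
# that (per the text) holds 16 face cards.
p = hypergeom_pmf(4, 52, 16, 10)
print(round(p, 3))  # ~0.224
```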
f. Discrete uniform distribution: This is the simplest of discrete distributions and applies when all of the outcomes have an equal probability of occurring. Figure 6A.5 presents a uniform discrete
distribution with five possible outcomes, each occurring 20% of the time:
Figure 6A.5: Discrete Uniform Distribution
The discrete uniform distribution is best reserved for circumstances where there are multiple possible outcomes, but no information that would allow us to expect that one outcome is more likely than
the others.
With continuous data, we cannot specify all possible outcomes, since they are too numerous to list, but we have two choices. The first is to convert the continuous data into a discrete form and then
go through the same process that we went through for discrete distributions of estimating probabilities. For instance, we could take a variable such as market share and break it down into discrete
blocks – market share between 3% and 3.5%, between 3.5% and 4% and so on – and consider the likelihood that we will fall into each block. The second is to find a continuous distribution that best
fits the data and to specify the parameters of the distribution. The rest of the appendix will focus on how to make these choices.
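The first approach, bucketing a continuous variable into discrete blocks, can be sketched in a few lines. The market-share values and block edges below are made up for illustration:

```python
# Bucket a continuous variable (market share) into discrete blocks
# and estimate a probability for each block from the observed counts.
shares = [0.031, 0.033, 0.036, 0.038, 0.041, 0.042, 0.044, 0.048]
edges = [0.030, 0.035, 0.040, 0.045, 0.050]

counts = [sum(lo <= s < hi for s in shares)
          for lo, hi in zip(edges, edges[1:])]
probs = [c / len(shares) for c in counts]

print(probs)  # [0.25, 0.25, 0.375, 0.125]
```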
How symmetric is the data?
There are some datasets that exhibit symmetry, i.e., the upside is mirrored by the downside. The symmetric distribution that most practitioners have familiarity with is the normal distribution, shown in Figure 6A.6 for a range of parameters:
Figure 6A.6: Normal Distribution
The normal distribution has several features that make it popular. First, it can be fully characterized by just two parameters – the mean and the standard deviation – and thus reduces estimation
pain. Second, the probability of any value occurring can be obtained simply by knowing how many standard deviations separate the value from the mean; the probability that a value will fall 2 standard
deviations from the mean is roughly 95%. The normal distribution is best suited for data that, at the minimum, meets the following conditions:
a. There is a strong tendency for the data to take on a central value.
b. Positive and negative deviations from this central value are equally likely
c. The frequency of the deviations falls off rapidly as we move further away from the central value.
The last two conditions show up when we compute the parameters of the normal distribution: the symmetry of deviations leads to zero skewness, and the low probabilities of large deviations from the central value reveal themselves as zero excess kurtosis.
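The "roughly 95% within 2 standard deviations" rule can be checked with the standard library's error function (a quick numerical sketch, not part of the original appendix):

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution, expressed via erf."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# Probability that a value falls within 2 standard deviations
# of the mean -- "roughly 95%" per the text.
within_2sd = normal_cdf(2) - normal_cdf(-2)
print(round(within_2sd, 4))  # 0.9545
```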
There is a cost we pay, though, when we use a normal distribution to characterize data that is non-normal since the probability estimates that we obtain will be misleading and can do more harm than
good. One obvious problem is when the data is asymmetric but another potential problem is when the probabilities of large deviations from the central value do not drop off as precipitously as
required by the normal distribution. In statistical language, the actual distribution of the data has fatter tails than the normal. While all of symmetric distributions in the family are like the
normal in terms of the upside mirroring the downside, they vary in terms of shape, with some distributions having fatter tails than the normal and the others more accentuated peaks. These
distributions are characterized as leptokurtic and you can consider two examples. One is the logistic distribution, which has longer tails and a higher kurtosis (1.2, as compared to 0 for the normal
distribution) and the other is the family of Cauchy distributions, which also exhibit symmetry and higher kurtosis and are characterized by a scale variable that determines how fat the tails are. Figure 6A.7 presents a series of Cauchy distributions that exhibit the bias towards fatter tails or more outliers than the normal distribution.
Figure 6A.7: Cauchy Distribution
Either the logistic or the Cauchy distributions can be used if the data is symmetric but with extreme values that occur more frequently than you would expect with a normal distribution.
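The fat-tail contrast is easy to quantify with closed-form survival functions (standard formulas, not taken from the text):

```python
from math import atan, erf, pi, sqrt

def normal_sf(x):
    """P(X > x) for a standard normal variable."""
    return 0.5 * (1 - erf(x / sqrt(2)))

def cauchy_sf(x):
    """P(X > x) for a standard Cauchy variable."""
    return 0.5 - atan(x) / pi

# Tail mass beyond 3: the Cauchy keeps roughly 10% of its probability
# out there, versus roughly 0.13% for the normal -- far fatter tails.
print(cauchy_sf(3), normal_sf(3))
```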
As the probabilities of extreme values increases relative to the central value, the distribution will flatten out. At its limit, assuming that the data stays symmetric and we put limits on the
extreme values on both sides, we end up with the uniform distribution, shown in figure 6A.8:
Figure 6A.8: Uniform Distribution
When is it appropriate to assume a uniform distribution for a variable? One possible scenario is when you have a measure of the highest and lowest values that a data item can take but no real
information about where within this range the value may fall. In other words, any value within that range is just as likely as any other value.
Most data does not exhibit symmetry and instead skews towards either very large positive or very large negative values. If the data is positively skewed, one common choice is the lognormal distribution, which is typically characterized by three parameters: a shape (s or sigma), a scale (m or median) and a shift parameter. When m = 0 and s = 1, you have the standard lognormal distribution, and when the shift parameter is 0, the distribution requires only the scale and sigma parameters. As sigma rises, the peak of the distribution shifts to the left and the skewness in the distribution increases. Figure 6A.9 graphs lognormal distributions for a range of parameters:
Figure 6A.9: Lognormal distribution
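The closed-form skewness of the lognormal, (e^(s^2) + 2)·sqrt(e^(s^2) − 1), makes the effect of sigma concrete (the formula is a standard result, not taken from the text):

```python
from math import exp, sqrt

def lognormal_skewness(s):
    """Skewness of a lognormal distribution with shape (sigma)
    parameter s: always positive, increasing in s."""
    return (exp(s * s) + 2) * sqrt(exp(s * s) - 1)

# Raising sigma makes the distribution progressively more
# right-skewed, matching Figure 6A.9.
for s in (0.25, 0.5, 1.0):
    print(s, lognormal_skewness(s))
```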
The Gamma and Weibull distributions are two distributions that are closely related to the lognormal distribution; like the lognormal distribution, changing the parameter levels (shape, shift and
scale) can cause the distributions to change shape and become more or less skewed. In all of these functions, increasing the shape parameter will push the distribution towards the left. In fact, at
high values of sigma, the left tail disappears entirely and the outliers are all positive. In this form, these distributions all resemble the exponential, characterized by a location (m) and scale
parameter (b), as is clear from figure 6A.10.
Figure 6A.10: Weibull Distribution
The question of which of these distributions will best fit the data will depend in large part on how severe the asymmetry in the data is. For moderate positive skewness, where there are both positive and negative outliers, but the former are larger and more common, the standard lognormal distribution will usually suffice. As the skewness becomes more severe, you may need to shift to a three-parameter lognormal distribution or a Weibull distribution, and modify the shape parameter till it fits the data. At the extreme, if there are no negative outliers and only positive outliers in the data, you should consider the exponential function, shown in Figure 6A.11:
Figure 6A.11: Exponential Distribution
If the data exhibits negative skewness, the choices of distributions are more limited. One possibility is the Beta distribution, which has two shape parameters (p and q) and upper and lower bounds on
the data (a and b). Altering these parameters can yield distributions that exhibit either positive or negative skewness, as shown in figure 6A.12:
Figure 6A.12: Beta Distribution
Another is an extreme value distribution, which can also be altered to generate both positive and negative skewness, depending upon whether the extreme outcomes are the maximum (positive) or minimum
(negative) values (see Figure 6A.13)
Figure 6A.13: Extreme Value Distributions
Are there upper or lower limits on data values?
There are often natural limits on the values that data can take on. As we noted earlier, the revenues and the market value of a firm cannot be negative and the profit margin cannot exceed 100%. Using
a distribution that does not constrain the values to these limits can create problems. For instance, using a normal distribution to describe profit margins can sometimes result in profit margins that
exceed 100%, since the distribution has no limits on either the downside or the upside.
When data is constrained, the questions that need to be answered are whether the constraints apply on one side of the distribution or both, and if so, what the limits on values are. Once these
questions have been answered, there are two choices. One is to find a continuous distribution that conforms to these constraints. For instance, the lognormal distribution can be used to model data,
such as revenues and stock prices that are constrained to be never less than zero. For data that have both upper and lower limits, you could use the uniform distribution, if the probabilities of the
outcomes are even across outcomes or a triangular distribution (if the data is clustered around a central value). Figure 6A.14 presents a triangular distribution:
Figure 6A.14: Triangular Distribution
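As a quick sketch of how a triangular distribution behaves, Python's standard library can sample from one directly via random.triangular(low, high, mode); the bounds and mode below are illustrative:

```python
import random

random.seed(42)
low, high, mode = 0.0, 100.0, 40.0  # illustrative bounds and peak

draws = [random.triangular(low, high, mode) for _ in range(10_000)]

# Every draw respects the upper and lower limits.
assert all(low <= d <= high for d in draws)

# The mean of a triangular distribution is (low + high + mode) / 3.
print(sum(draws) / len(draws))  # close to (0 + 100 + 40) / 3, about 46.7
```

This is exactly the appeal of the triangular distribution for constrained data: the bounds are hard limits, not just low-probability tails.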
An alternative approach is to use a continuous distribution that normally allows data to take on any value and to put upper and lower limits on the values that the data can assume. Note that the cost of imposing these constraints is small in distributions like the normal, where the probability of extreme values is very small, but it increases as the distribution exhibits fatter tails.
How likely are you to see extreme values of data, relative to the middle values?
As we noted in the earlier section, a key consideration in what distribution to use to describe the data is the likelihood of extreme values for the data, relative to the middle value. In the case of
the normal distribution, this likelihood is small and it increases as you move to the logistic and Cauchy distributions. While it may often be more realistic to use the latter to describe real world
data, the benefits of a better distributional fit have to be weighed against the ease with which parameters can be estimated from the normal distribution. Consequently, it may make sense to stay with the normal distribution for symmetric data, unless the likelihood of extreme values rises above a threshold.
The same considerations apply for skewed distributions, though the concern will generally be more acute for the skewed side of the distribution. In other words, with a positively skewed distribution, the question of which distribution to use will depend upon how much more likely large positive values are than large negative values, with the fit ranging from the lognormal to the exponential.
In summary, the question of which distribution best fits data cannot be answered without looking at whether the data is discrete or continuous, symmetric or asymmetric and where the outliers lie.
Figure 6A.15 summarizes the choices in a chart.
Tests for Fit
The simplest test for distributional fit is visual with a comparison of the histogram of the actual data to the fitted distribution. Consider figure 6A.16, where we report the distribution of current
price earnings ratios for US stocks in early 2007, with a normal distribution superimposed on it.
Figure 6A.16: Current PE Ratios for US Stocks – January 2007
The distributions are so clearly divergent that the normal distribution assumption does not hold up.
A slightly more sophisticated test is to compute the moments of the actual data distribution – the mean, the standard deviation, skewness and kurtosis – and to examine them for fit to the chosen
distribution. With the price-earnings data above, for instance, the moments of the distribution and key statistics are summarized in table 6A.1:
Table 6A.1: Current PE Ratio for US stocks – Key Statistics
│                    │ Current PE │ Normal Distribution │
│ Mean               │ 28.947     │                     │
│ Median             │ 20.952     │ Median = Mean       │
│ Standard deviation │ 26.924     │                     │
│ Skewness           │ 3.106      │ 0                   │
│ Kurtosis           │ 11.936     │ 0                   │
Since the normal distribution has zero skewness and zero excess kurtosis, we can easily reject the hypothesis that price earnings ratios are normally distributed.
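The moment check above is easy to script. A minimal pure-Python sketch (the sample below is an illustrative stand-in for real PE data, not the actual dataset) computes the four moments, reporting excess kurtosis so that a normal sample scores near zero:

```python
import math

def moments(data):
    """Mean, standard deviation, skewness, and excess kurtosis of a sample."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in data) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in data) / (n * sd ** 4) - 3.0  # excess
    return mean, sd, skew, kurt

# A heavily right-skewed toy sample (illustrative stand-in for PE ratios):
sample = [8, 10, 12, 15, 18, 22, 30, 45, 90, 180]
mean, sd, skew, kurt = moments(sample)
print(f"mean={mean:.1f} sd={sd:.1f} skew={skew:.2f} excess kurtosis={kurt:.2f}")
```

A clearly positive skewness and excess kurtosis, as here, are the same red flags the table above raises for the actual PE data.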
The typical tests for goodness of fit compare the actual distribution function of the data with the cumulative distribution function of the distribution that is being used to characterize the data, to either accept the hypothesis that the chosen distribution fits the data or to reject it. Not surprisingly, given its constant use, there are more tests for normality than for any other distribution. The Kolmogorov-Smirnov test is one of the oldest tests of fit for distributions, dating back to the 1930s. Improved versions of the test include the Shapiro-Wilk and Anderson-Darling tests. Applying these tests to the current PE ratio yields the unsurprising result that the hypothesis that current PE ratios are drawn from a normal distribution is roundly rejected:
Tests of Normality
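To sketch the idea behind the Kolmogorov-Smirnov statistic: it is the largest gap between the sample's empirical CDF and the candidate CDF. The minimal pure-Python version below (illustrative samples; it computes no p-values, so it is not a substitute for a proper library routine) shows a skewed sample producing a much larger gap against the normal CDF than a roughly symmetric one:

```python
import math

def normal_cdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest gap between
    the sample's empirical CDF and the candidate CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

# A roughly symmetric sample vs. a clearly right-skewed one (illustrative):
near_normal = [-1.5, -1.0, -0.6, -0.3, -0.1, 0.1, 0.3, 0.6, 1.0, 1.5]
skewed = [0.1, 0.2, 0.3, 0.5, 0.8, 1.3, 2.1, 3.4, 5.5, 8.9]

print(ks_statistic(near_normal, normal_cdf))  # small gap
print(ks_statistic(skewed, normal_cdf))       # much larger gap
```

In practice you would use a statistics package, which also converts the statistic into a p-value; the gap itself is the whole intuition.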
There are graphical tests of normality, where probability plots can be used to assess the hypothesis that the data is drawn from a normal distribution. Figure 6A.17 illustrates this, using current PE
ratios as the data set.
Given that the normal distribution is one of the easiest to work with, it is useful to begin by testing data for non-normality to see if you can get away with using the normal distribution. If not, you can extend your search to other, more complex distributions.
Raw data is almost never as well behaved as we would like it to be. Consequently, fitting a statistical distribution to data is part art and part science, requiring compromises along the way. The key to good data analysis is maintaining a balance between getting a good distributional fit and preserving ease of estimation, keeping in mind that the ultimate objective is that the analysis should lead to better decisions. In particular, you may decide to settle for a distribution that fits the data less completely over one that fits it better, simply because estimating the parameters may be easier with the former. This may explain the overwhelming dependence on the normal distribution in practice, notwithstanding the fact that most data do not meet the criteria needed for the distribution to fit.
Figure 6A.15: Distributional Choices
|
{"url":"https://pages.stern.nyu.edu/~adamodar/New_Home_Page/StatFile/statdistns.htm","timestamp":"2024-11-14T01:37:28Z","content_type":"text/html","content_length":"114379","record_id":"<urn:uuid:a4e438a2-2ff5-4ff8-9383-f2b112e1f7a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00429.warc.gz"}
|
Salary in Maryland
Float pool nurses in Maryland earn an average of $90,467 per year (or $43.49 per hour).
6% higher than the national average
Maryland float pool nurses earn 6% higher than the national average salary for float pool nurses, at $84,768 (or $40.75 per hour).
Float pool nurse salary range in Maryland
Annual Salary Hourly Wage
90th Percentile $111,251 $53
75th Percentile $107,015 $51
Median $85,750 $41
25th Percentile $71,194 $34
80% of Maryland float pool nurses earn between $66,126 and $111,251.
Cost-of-living adjusted float pool nurse salary in Maryland
Adjusted for cost-of-living, Maryland float pool nurses earn about $84,946 per year. Cost-of-living in Maryland is 6% higher than the national average, meaning they face higher prices for food,
housing, and transportation compared to other states.
Float pool nurses salaries in other states
California $124,815 per year
Massachusetts $96,630 per year
Washington $103,662 per year
New York $99,852 per year
Minnesota $85,142 per year
Arizona $93,257 per year
Colorado $80,942 per year
Texas $84,012 per year
Illinois $77,983 per year
Wisconsin $73,950 per year
How much do other nurses get paid in Maryland?
Clinical Informatics Nurse $99,891 per year
HIV Nurse $94,237 per year
Reproductive Nurse $94,237 per year
Hematology Nurse $90,467 per year
Infusion Nurse $90,213 per year
Public Health Nurse $89,525 per year
Nurse Manager $88,582 per year
Research Nurse $87,640 per year
Quality Assurance Nurse $87,640 per year
Pediatric Critical Care Nurse $87,640 per year
At a $90,467 average annual salary, float pool nurses in Maryland tend to earn less than clinical informatics nurses ($99,891), HIV nurses ($94,237), and reproductive nurses ($94,237). They earn about the same as hematology nurses ($90,467), and more than infusion nurses ($90,213), public health nurses ($89,525), nurse managers ($88,582), research nurses ($87,640), quality assurance nurses ($87,640), and pediatric critical care nurses ($87,640).
More about float pool nurses
A float pool nurse serves as a flexible resource of nurses who are ready to adapt to versatile roles in a healthcare system. This resourceful pool is often created to fill in short-staffed units and
relieve other nurses during their meals and other mandatory breaks.
|
{"url":"https://www.incrediblehealth.com/salaries/s/float-pool-nurse/md","timestamp":"2024-11-04T01:50:36Z","content_type":"text/html","content_length":"76992","record_id":"<urn:uuid:b20d35d9-da2d-486a-acfd-3cb8dfccabd3>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00730.warc.gz"}
|
Hogwarts Legacy - How to Solve the Arithmancy Puzzle Doors
Hogwarts Legacy – How to Solve the Arithmancy Puzzle Doors
Arithmancy Puzzle Doors Solution
It is not necessary to locate the cipher in order to solve the Arithmancy Door puzzles. You can open every Arithmancy Door you come across as long as you know what each symbol is worth.
The Arithmancy Door puzzles can be a little intimidating to solve, whether you have the cipher or not, but they are a lot simpler than they appear. Simply approach the door to begin solving the puzzle, and a list of equations will appear. The core number in each equation will be surrounded by three other numbers.
The sum of the three outer numbers must equal the number in the middle.
The catch is that some of the symbols will be replaced by a question mark and possibly a creature-like symbol that corresponds to one of the symbols visible surrounding the door’s arch. Starting from
left to right, the symbols represent the numbers 0 through 9. Each of these symbols is a number.
Fortunately, you can find the solution to the substitution equation and open the door.
Using the above door as an example, you’ll always want to start from the central number. So, we’ll start with nine, and then you’ll want to subtract both two and the dragon-like symbol, which, when
using the cipher, shows that its numerical value is three.
This means that the ? is equal to four – which is represented by the bird symbol.
To confirm this result, add all three of these numbers together, and you’ll get nine. Now, simply repeat this step for the second equation – see the equation below:
Now that you’ve worked out the value of ? and ??, roll the large triangle symbols beside each door, so the question marks represent the symbol of each numerical value.
You may now access all the hidden stuff inside, most of which will be gear, once the symbols have been rolled into position.
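The subtraction step can be sketched in a few lines of Python; the symbol values here are just the ones from the example above:

```python
def solve_unknown(center, known_outer):
    """The three outer numbers must sum to the center number,
    so the missing symbol is the center minus the known outers."""
    return center - sum(known_outer)

# Example door: center 9, visible outer values 2 and 3 (the dragon symbol).
missing = solve_unknown(9, [2, 3])
print(missing)  # -> 4, the bird symbol
```

Any door works the same way: subtract the outer values you can read from the center to get the value the question mark must roll to.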
|
{"url":"https://gameplay.tips/guides/hogwarts-legacy-how-to-solve-the-arithmancy-puzzle-doors.html","timestamp":"2024-11-14T17:21:50Z","content_type":"text/html","content_length":"69460","record_id":"<urn:uuid:9154c0d1-307a-4295-ad6a-a3b8de8a3af4>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00418.warc.gz"}
|
Two Palindromes
Number Theory
Two Palindromes
Two integers differ by 22. Each, when multiplied by its successor, yields an eight-digit palindrome. What is the smaller of the two?
Photo by Silvio Kundt on Unsplash
Last week, in a blog post titled Finding Patterns, I introduced a puzzle. I asked my teenager to solve it using only a pen/pencil and paper.
The Puzzle
Two integers differ by 22. Each, when multiplied by its successor, yields an eight-digit palindrome. What is the smaller of the two?
Three Steps
Using a collaborative effort between human and machine, we can achieve more, while having fun solving problems and finding new patterns. Let’s try this in three steps. Step (I) is purely human
effort. Step (II) is all brute power of the machine. Step (III) is a middle ground, a collaborative effort.
Photo by Ali Yahya on Unsplash
Step I: Human Effort
This section features human effort alone, without the use of the internet, or a computer program. To get started, we have to translate the puzzle from English to the language of Mathematics.
• Let p and q be the two integers with q > p
• We are told p + 22 = q (p being smaller of the two)
• The successor of p is (p+1) and that of q is (q+1)
• Let the two palindromes be P = abcddcba and Q = wxyzzyxw
• Finding p is our objective
Given these facts, we can write them as math equations:
Translation of puzzle from English to Math Equations
abcddcba and wxyzzyxw are the two palindromes with each letter denoting a digit — each different or with some overlap, we don’t know.
The Basics
At the risk of losing a few readers, I will try to be as accommodating as possible. If you know this already, you may skip to Spotting Patterns section.
• Integers are whole numbers, or counting numbers. Although not explicitly spelled out, this puzzle involves positive integers.
-7, -8, -67 are a few negative integers. 56, 117, 2938 are a few positive integers.
• Palindrome is a number that reads the same forwards and backwards.
25744752, 5665, 77, 8 are integers that are palindromes
• A successor to an integer is one that follows immediately after. Referred together, the two are consecutive integers.
7 is the successor to 6.
28 is the successor to 27.
Decimal Expansion
Any integer (in the base-ten or decimal system) can be expressed as a sum of powers of 10. For example, the decimal expansion of 5665 is
5665 = 5×10³ + 6×10² + 6×10¹ + 5×10⁰
Decimal Expansion of a number
Prime Factors
Any integer can be expressed as a product of its prime factors. A prime number is divisible by 1 and itself. 2 is an even prime. The rest of them are odd.
Prime factors of 5665 are 5, 11 and 103 because 5665 = 5 x 11 x 103
Spotting Patterns
Photo by Silvio Kundt on Unsplash
(I) Test of divisibility by 11
A test of divisibility by 11 is interesting. For any integer, reading left to right, take the sums of alternate digits in that number. You must end up with two sums.
If the sums differ by a multiple of 11, the number is divisible by 11.
EXAMPLE: Take the number 947683 for instance.
Alternate digits are S = {9, 7, 8} and T = {4, 6, 3}.
The sum of S = 9 + 7 + 8 = 24. The sum of T = 4 + 6 + 3 = 13.
Their difference is (T - S) = (13 - 24) = 11 x (-1), divisible by 11.
Therefore 947683 is divisible by 11.
In fact, 947683 = 86153 x 11
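The rule translates directly into a short function; a minimal sketch:

```python
def divisible_by_11(n: int) -> bool:
    """Test divisibility by 11 via alternating digit sums."""
    digits = [int(d) for d in str(abs(n))]
    s = sum(digits[0::2])  # 1st, 3rd, 5th, ... digits from the left
    t = sum(digits[1::2])  # 2nd, 4th, 6th, ... digits from the left
    return (s - t) % 11 == 0

print(divisible_by_11(947683))  # -> True, since 947683 = 86153 x 11
print(divisible_by_11(947684))  # -> False
```

This is the same test used later in Step I to pin down the missing digit n in candidates like 45n1.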
(II) Consecutive integers
Even integers are divisible by 2. Odd integers are not. Two additional interesting facts about a pair of consecutive integers are:
1) One of them is odd, and the other even. Their product is even
2) They share no prime factors. Their highest common factor is 1
(III) Perfect squares
Two consecutive integers are of the form {2k, 2k+1} where k = {0, 1, 2, 3, …} A perfect square must be of the form {(2k)², (2k+1)²} = {4k², 4k(k+1) + 1}.
A perfect square is exactly divisible by 4, OR It leaves 1 as remainder when divided by 4
(IV) Perfect square ending in 5
A curious fact about perfect squares ending in 5 is that the penultimate digit is always 2! If a number ends in 5, its square will always end in 25.
EXAMPLES: Say you want to find 45². The only arithmetic we need is 4x5 = 20. Then attach 25 to it. The answer is 20|25!
45² = 4(4+1)| 25 = 2025
Other examples:
75² = 7(7+1)| 25 = 5625
115² = 11(11+1)| 25 = 13225
(10a+5)² = 100a² + 100a + 25 = 100a(a+1) + 25
If you look at the decimal expansion of a square ending in 5, this pattern is easy to spot. Notice that no matter what ‘a’ is, the result will end in 25. Also the other digits in the number are just
product of two consecutive numbers (a) (a+1) shifted left by two digits to make way for 25!
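The pattern is easy to verify exhaustively; a quick sketch:

```python
# Verify: for any number ending in 5, its square is a(a+1) followed by 25,
# where a is the number with the trailing 5 removed.
for n in range(5, 1000, 10):          # 5, 15, 25, ..., 995
    a = n // 10
    assert n * n == a * (a + 1) * 100 + 25

print(45 * 45)    # -> 2025  (4*5 = 20, then append 25)
print(115 * 115)  # -> 13225 (11*12 = 132, then append 25)
```

This is the shortcut used later to bound p between 4500 and 5500 without a calculator.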
(V) Quadratic discriminant
In a quadratic equation ax² + bx + c = 0, the solutions involve the square root of the discriminant: Δ = √(b² - 4ac). For the solutions x to be whole numbers, the discriminant must be a perfect square.
Δ² = b² - 4ac is a perfect square for a quadratic equation with integral solutions
(VI) Palindrome with even number of digits
There are four pairs of integers that make up each palindrome. Let’s look at the decimal expansion of P for instance.
Decimal Expansion of P: P (and similarly Q) has 11 as a prime factor.
Each term is divisible by 11. Notice how each digit is paired with an identical digit in a different location, such that their place-value sum {10000001, 100001, 1001, 11} is divisible by 11. P and Q must therefore be divisible by 11. We can extend this to any palindrome with an even number of digits: using modular arithmetic and congruence relations, we can show they are all divisible by 11. This gives us a useful fact:
A palindrome that has an even number of digits is divisible by 11
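A brute-force check over all even-length palindromes below one million supports the claim; a small sketch:

```python
def is_palindrome(n: int) -> bool:
    s = str(n)
    return s == s[::-1]

# All 2-, 4-, and 6-digit palindromes.
evens = [n for n in range(10, 1_000_000)
         if is_palindrome(n) and len(str(n)) % 2 == 0]

print(len(evens), "even-length palindromes checked")
print(all(n % 11 == 0 for n in evens))  # -> True: every one is divisible by 11
```

There are 9 two-digit, 90 four-digit, and 900 six-digit palindromes, so the check covers 999 numbers in total.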
Detective Work
Let’s start with digits {a, w} that can assume any of the values {0,1,2,...9}. But can they? Not really, because we have some enticing clues!
Process of Elimination
Using Pattern (II), both P and Q must be even. So they must begin (and end) with the following digits {2, 4, 6, 8}. I have excluded {0} because they cannot be zero if we want the palindromes to be
eight-digits long! So we have eliminated {1, 3, 5, 7, 9}. That’s a reduction by 5/9 = 55.5%, right off the bat!
{a,w} = {2, 4, 6, 8}
P = p² + p is a quadratic equation with integer solutions. Using Pattern (V), the discriminant must be a perfect square. Comparing it with ax² + bx + c = 0 we have
a = 1, b = 1, c = -P and Δ² = (b²- 4ac) = 1 + 4P
Using Pattern (III), perfect squares are exactly divisible by 4 or leave a remainder 1 when divided by 4. We also know a set of rightmost digits of P and Q. Let’s check what the rightmost digits of 1
+ 4P are.
P = {2bcddcb2, 4bcddcb4, 6bcddcb6, 8bcddcb8}
Δ² = 1 + 4P ends in digits {9, 7, 5, 3} respectively
There aren't any perfect squares that end in 7 or 3, since the last digit of a perfect square can only be 0, 1, 4, 5, 6 or 9. We can apply the same logic to Q.
P cannot be {4bcddcb4, 8bcddcb8}. And Q cannot be {4xyzzyx4, 8xyzzyx8}. Using this pattern, we managed to cut the possibilities in half again!
{a,w} = {2, 6}
Since the palindrome has eight-digits, both p and q must be four-digit numbers. Starting with {a} = {6} for P, we can show {w} = {0} for Q, which is a contradiction! Let’s understand how.
Follow along using Table 1. Take row #5 for instance, where {a,w} = {6}. Two consecutive numbers (p, p+1) can end in (…2, …3) or (…7, …8) if the last digit of P is required to be 6. The leading dots
denote digits we don’t yet know.
Since q = p + 22, the digit endings for (q, q+1) will either be (…4, …5) or (…9, …0). Neither of those pairs, when multiplied, gives a product ending in 6 for Q. Using this logic, all pairs except the rows in green can be ruled out: {a, w} cannot be {6}.
We did it again, cut the solution space in half!
Table 1: Digit endings for p, p+1, q, q+1, P and Q when a, w = {2, 6}
When {a, w } = {2}, p ends in digits {1, 6}
{a,w} = {2}, which implies the following:
P = 2bcddcb2 and p = { ...1, ...6 }
Q = 2xyzzyx2 and q = { ...3, ...8 }
We know the largest and smallest eight-digit palindromes that begin (and end) with 2. They are bound by P(min, max) = {20000002, 29999992}. We can find out the leading (i.e., first) digits of p using an approximation that doesn't alter its rightmost digit.
If p² + p = P, approximately p² ~ P and p ~ √P
p is in the range p(min, max) = { √20000002, √29999992 } ~ [4500, 5500)
I have used Pattern (IV): technically we don't need a calculator, because 5500² = 55² x 10⁴ = 30250000 and similarly 4500² = 45² x 10⁴ = 20250000
The first two digits of p are in this set {45, 46, 47, …, 53, 54}. Notice I have excluded 55 because 5500² > 29999992 and included 45 because 4500² > 20000002. We can write this more compactly:
p = {4mn1, 4mn6, 5st1, 5st6} m = {5,6,7,8,9} and s={0,1,2,3,4}
Using the Pattern (II) and (VI), {2, 11} are the prime factors of P = p (p+1) and Q = q(q+1). But p cannot have both 2 and 11 as its prime factors. That’s because they are consecutive, their highest
common (prime) factor is 1. So we have to distribute {2, 11} between {p, p+1}.
Case 1: p is odd
If p is odd, 11 is its prime factor. In that case, p+1 is even and divisible by 2
p = {4mn1} m = {5,6,7,8,9} or p = {5st1} s={0,1,2,3,4}
Consider p = 45n1 {m=5}: Apply the test of divisibility by 11. n + 4 = 5 + 1, which gives us n = 2. For the rest of {m, n} and {s,t} we can tabulate possible values of p. If n > 9, we discard the
number because n is a digit!
Table 2: Odd valued p divisible by 11
Case 2: p is even
If p is even it is divisible by 2. In that case, p+1 is odd and 11, its factor.
p = {4mn6} m = {5,6,7,8,9} or p = {5st6} s={0,1,2,3,4}
Consider p = 45n6. In this case p+1 = 45n7 is divisible by 11. n + 4 = 7+ 5, which gives us n = 8. For the rest of {m, n} and {s,t} we can tabulate possible values of p. Again, if n > 9, we discard
the number because n is a digit! If p > 5470 we discard it.
Table 3: Odd valued (p+1) divisible by 11
Potential Suspects
Our detective work is almost complete. We are down to a short list of suspects (sixteen candidates). There could be one or more culprits that fit the pattern!
p = { 4521, 4631, 4741, 4851, 4961, 5071, 5181, 5291, 5401 }
p = { 4586, 4696, 5026, 5136, 5246, 5356, 5466 }
At this point, one could plug these numbers in a pocket calculator and check. For example, let’s test p = 4521
p = 4521
P = p(p+1) = 4521(4522) = 20443962
We can save ourselves some time with long multiplication by spotting more patterns! If we multiply the last two digits: 21 x 22 = 462. Similarly, the first two digits: 45 x 45 = 2025.
Notice the first two digits of P are 20, and the last two are 62.
Since P isn't a palindrome, there is no need to check Q. We move on.
I don’t know of any other patterns. I explored prime factors, digit endings of penultimate (tens place) digits and properties of consecutive integers and palindromes. But I ran out of ideas. I am
interested in any other clever ideas to reduce the solution set further! Using simple numerical patterns for integers, we managed to reduce the solution space to scan. The bar chart shows how this
reduction came about. Initial count was 9000, finally we are left with 16.
Figure 1: Palindrome count reduction using patterns. Patterns used vs. palindromes to test
The Culprit
It turns out only p = 5291, q = 5313 have this unique property, where they differ by 22 and each, when multiplied by its successor, yields an eight-digit palindrome. We have finally solved the problem by finding the values of p that have this property!
p = 5291 = (11 ⨯ 13 ⨯ 37)
p+1 = 5292 = (2² ⨯ 3³ ⨯ 7²)
P = p(p+1) = 5291(5292) = 27999972. Bingo! A palindrome.
q = p + 22 = 5313 = (3 ⨯ 7 ⨯ 11 ⨯ 23)
q+1 = 5314 = (2 ⨯ 2657)
Q = q(q+1) = 5313(5314) = 28233282. Another palindrome!
Table 4: Candidates for palindromic product. Only p = 5291, q = 5313 fit the pattern
Step II: Computing Power
In this section, we will explore the power of gigahertz clock speed! Anyone with basic coding/programming experience can appreciate the difference in speed compared to a human with a pen/paper.
Photo by Clément H on Unsplash
I am using a MacBook with 16 GB RAM, 2.5 GHz Intel Core i7 processor.
How Many Eight-Digit Palindromes?
For curiosity’s sake, let’s find out how many eight-digit palindromes there are. The following code snippet provides the answer. It takes ~10.0 ± 0.2 s.
Answer: There are 9000 eight-digit integer palindromes
How Fast?
This is extremely slow for a simple program. Modern laptops have processing speeds that easily exceed 10 billion steps or operations every second. A CPU clock cycle is roughly 0.3 ns (less than half
a nanosecond). But to appreciate how fast the machine really operates, we need something we (as humans) can relate to.
We understand and can relate to one second (One Mississippi). Let’s say we arbitrarily assign 1 CPU cycle ( 0.3 ns) to be one second.
1 CPU cycle = (0.3 ns or 0.0000000003 s) = 1 s
Given this scale, how long are the 10.0 seconds it took to generate all eight-digit palindromes?
10 s / 0.0000000003 = 3.3 × 10¹⁰ scaled seconds ~ 1056 years!
Code Snippet 1: How many eight-digit palindromes are there?
We can do a lot better at generating and counting, if we know what a palindrome looks like. And we certainly do. We know half the digits (the first four or last four), we can construct the other
half. Let us use this pattern to generate and count how many eight-digit palindromes are there.
P = 11000*i - 9900* (i//10) - 990 * (i//100) - 99*(i//1000)
i = [1000, ..., 9999]
Code Snippet 2: Generate all eight-digit palindromes
Using this pattern, we can generate and count them all in 4.0 ± 0.2 milliseconds (ms) which is a significant (three orders of magnitude) improvement!
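The snippet itself is not reproduced on this page, so the following is a plausible reconstruction of what Code Snippet 2 does with that formula (the function name is mine):

```python
def eight_digit_palindromes():
    """Generate every eight-digit palindrome from its first four digits i.
    For i = d1 d2 d3 d4, the formula yields d1 d2 d3 d4 d4 d3 d2 d1."""
    for i in range(1000, 10000):
        yield 11000 * i - 9900 * (i // 10) - 990 * (i // 100) - 99 * (i // 1000)

pals = list(eight_digit_palindromes())
print(len(pals))  # -> 9000
```

Because each palindrome is fully determined by its first four digits, the generator does arithmetic only, with no string building or palindrome testing, which is where the thousand-fold speedup comes from.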
Brute Force
We can scan all eight-digit palindromes and test for this property. Let’s find out how long it takes to generate, check and find integers that have this property. The program is shown below. It takes
about (7.1 ± 0.3 ms)
Code snippet 3: Generate and test all palindromes properties for solving the puzzle
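The original snippet is likewise not shown here; a sketch of an equivalent brute-force search, scanning four-digit values of p directly rather than palindromes, is:

```python
def is_palindrome(n: int) -> bool:
    s = str(n)
    return s == s[::-1]

# p(p+1) is an eight-digit number roughly for p in [3163, 9999].
good = [p for p in range(3163, 10000)
        if 10**7 <= p * (p + 1) < 10**8 and is_palindrome(p * (p + 1))]

# Which of those differ by exactly 22?
good_set = set(good)
answers = [(p, p + 22) for p in good if p + 22 in good_set]
print(answers)  # -> [(5291, 5313)]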
Step III: Collaboration
Two Palindromes. Image by Author
We went through the manual exercise in Step I. We managed to cut the solution space by half, in three steps. By the time we were done, we had a handful of candidates to verify. One could use any or
all of the patterns to solve the problem at hand. I have chosen to start here:
p = {4mn1, 4mn6, 5st1, 5st6} m = {5,6,7,8,9} and s={0,1,2,3,4}
Let’s code this and find out how long it takes! The code snippet is shown below. It takes (55.6 ± 9 μs) which is a thousand-fold reduction!
Now let’s compare how fast the execution is in human relatable time. Let’s recall we started with 1056 years to list all the eight-digit palindromes. If we calculate how long, we get about 2 days!
That’s a remarkable reduction which went unnoticed. As humans it is hard to wrap our heads around both large and small numbers.
56 μs ~ 2 days
Code Snippet 4: Use patterns to test only a subset of candidates that conform to a set of patterns
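Code Snippet 4 is not reproduced on this page either; a sketch of the pattern-restricted search over the sixteen candidates from Step I is:

```python
def is_palindrome(n: int) -> bool:
    s = str(n)
    return s == s[::-1]

# The sixteen candidates surviving the pattern analysis in Step I.
candidates = [4521, 4631, 4741, 4851, 4961, 5071, 5181, 5291, 5401,
              4586, 4696, 5026, 5136, 5246, 5356, 5466]

# Keep only p where both p(p+1) and (p+22)(p+23) are palindromes.
hits = [p for p in candidates
        if is_palindrome(p * (p + 1)) and is_palindrome((p + 22) * (p + 23))]
print(hits)  # -> [5291]
```

Sixteen multiplications instead of a full scan is exactly why this hybrid approach is three orders of magnitude faster than brute force.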
Figure 2: Comparison of code-execution times: Using patterns has a huge advantage over brute force alone!
Collaborate, Create
Humans and machines are both powerful entities. We have evolved the capability to solve complex problems with our pattern recognition skills, capacity for abstract thought, and creative talents. If
we collaborate with the machine to utilize its super-human strength and intelligence, the human-computer collaboration would make a winning combination!
1. Some Problems on the Prime Factors of Integers — P. Erdös, L. Selfridge, Illinois J. Math., Volume 11, Issue 3 (1967), 428–430. Link to PDF
2. Divisibility by Eleven — Mudd Math Fun Facts
3. Positive numbers k such that k*(k+1) is a palindrome. A028336 — OEIS, Patrick De Geest
4. Problem originally appeared in Mindsport, Sunday Times of India, By Mukul Sharma, Early 1990's
5. The rise of human-computer cooperation, TED Talk 2012, Shyam Sankar
6. Coding Horror blog post: The infinite space between words, 2014, Jeff Atwood
©️ Venkat Kaushik 2020. All Rights Reserved.
|
{"url":"https://www.cantorsparadise.org/two-palindromes-f94c39677e26/","timestamp":"2024-11-14T01:59:47Z","content_type":"text/html","content_length":"55522","record_id":"<urn:uuid:149119f6-46aa-4e50-8bc6-1e2e85ad9c40>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00707.warc.gz"}
|
Special purpose charts Archives - Free List to Chart Online
Compare diverse datasets easily, make informed decisions, and engage your audience with striking, easy-to-understand visuals. Ideal for business, academic, or personal use.
Did you love the function?
Bookmark This Page
for easy access next time.
Our ‘Side by Side Comparison Bar Chart’ service helps you show two sets of data next to each other. You can use this to compare things easily. For example, if you want to show how many boys and girls
are in each month of the year, or how many products you sold this year compared to last year, this tool can help.
You just need to put your data into the tool, and it will make two bar charts for you. These charts will be next to each other, so you can see the differences between the two sets of data clearly.
This tool is good for anyone who needs to compare two sets of numbers or amounts. It’s easy to use and understand, so you can start comparing your data right away
Instant-Interactive Math Graph Generator
Simply enter any math formula and the -x to x limits, and instantly get a downloadable graph chart. Math formula graph builder
For usage read the usage section below:
This service is a tool to make a chart from any basic math formula. You can write multiple formulas in one chart, each on a new line.
For example, if you want to see the line for "x*x", you write "x*x" in the formula box. You can also write other formulas like "x*x*x", "sin(x)", and more. There are two kinds of charts you can choose from. One is 'line' and the other is 'area'. 'Line' will make lines for each formula. 'Area' will fill in the space under the lines. You need to tell the tool what the smallest and largest 'x' values are. These values decide how wide the chart is.
This tool will make a chart that helps you understand the formulas better.
• Linear Function: A linear function can be represented as y = mx + b where m is the slope of the line and b is the y-intercept. To graph a line with a slope of 2 and a y-intercept of 1, enter 2*x
+ 1 into the text area.
• Quadratic Function: A quadratic function has the form y = ax² + bx + c. For example, to plot the function y = x² - 5x + 6, enter x^2 - 5*x + 6.
• Cubic Function: A cubic function is of the form y = ax³ + bx² + cx + d. For instance, to plot the function y = 2x³ - x² + 3x - 1, enter 2*x^3 - x^2 + 3*x - 1.
• Exponential Function: An exponential function can be written as y = a*b^x. For example, to plot y = 2*3^x, enter 2*3^x.
• Logarithmic Function: A logarithmic function has the form y = a * log_b(x). For instance, to plot y = log_2(x), enter log(x, 2).
• Sine and Cosine Functions: To plot sine and cosine functions, enter sin(x) or cos(x) respectively. For their variations such as amplitude or phase shift changes, adjust the formula accordingly.
For example, for a sine wave with amplitude 2 and a phase shift of π/2, enter 2*sin(x + pi/2).
Remember, each formula should be on its own line in the text area. This way, you can graph multiple formulas at once. Just press “Enter” to create a new line for each formula. Be sure to replace any
multiplication symbols with * and division symbols with /.
Please note that the ‘x’ in the formulas is case sensitive and should be in lowercase.
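For readers who want to reproduce such plots programmatically, formula strings like the examples above can be sampled into (x, y) points. The sketch below is illustrative only: the `ALLOWED` whitelist and the `sample()` helper are invented for this example and are not the tool's own code. It also converts the `^` used in the examples to Python's `**` operator.

```python
import math

# Illustrative sketch: sample formula strings (as in the examples above)
# over an x range, the way a graphing tool might. ALLOWED and sample()
# are invented for this example, not the tool's implementation.
ALLOWED = {"sin": math.sin, "cos": math.cos, "log": math.log, "pi": math.pi}

def sample(formula, x_min, x_max, steps=100):
    """Return (x, y) points for one formula string over [x_min, x_max]."""
    expr = formula.replace("^", "**")  # Python uses ** for powers
    points = []
    for i in range(steps + 1):
        x = x_min + (x_max - x_min) * i / steps
        y = eval(expr, {"__builtins__": {}}, {**ALLOWED, "x": x})
        points.append((x, y))
    return points

# One formula per line, as described for the text area.
lines = "2*x + 1\nx^2 - 5*x + 6\n2*sin(x + pi/2)".splitlines()
curves = [sample(f, -10, 10) for f in lines]
```

Note that `math.log` accepts an optional base, so the `log(x, 2)` form shown above evaluates directly.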
You may want to try these instant chart/graph builders:
Side by Side Comparison Bar Chart Builder
Instant-Interactive Math Graph Generator
List to bubble chart generator
CSV to Chart Online
Text List to Chart
List to bubble chart generator
Usage: x axis, y axis, radius of a bubble
The List to Bubble Chart Generator is an online tool that lets you transform simple lists into engaging, informative bubble charts. A bubble chart is useful in many situations, for instance when you want to compare different data sets and see the relations between them.
In the default example, “TV Sales” and “Smartphone Sales”, you input data for each category, specifying the x-coordinate, y-coordinate, and radius (size) of each bubble, each of which represents a particular data point. Each line in the list represents a different bubble, with the category name specified at the start.
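Since the exact input syntax is not spelled out on this page, the following sketch assumes a hypothetical "Category, x, y, radius" line format purely for illustration, to show how such a list maps to per-category bubble series; `parse_bubbles` and the sample data are invented here.

```python
# Hypothetical "Category, x, y, radius" line format; the tool's real
# input syntax is not documented in this excerpt.
def parse_bubbles(text):
    """Group 'Category, x, y, r' lines into {category: [(x, y, r), ...]}."""
    series = {}
    for line in text.strip().splitlines():
        category, x, y, r = (part.strip() for part in line.split(","))
        series.setdefault(category, []).append((float(x), float(y), float(r)))
    return series

data = """\
TV Sales, 10, 20, 5
TV Sales, 15, 25, 8
Smartphone Sales, 12, 30, 10
"""
bubbles = parse_bubbles(data)  # one series (color) per category
```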
The tool reads the data from your list, recognizes the different categories and their values, and converts this information into a visually intuitive bubble chart. Each category is presented in a different color for easy differentiation; colors are chosen randomly.
The x-coordinate and y-coordinate values position the bubbles on the chart, and the radius determines the size of each bubble. The larger the radius, the larger the bubble.
No technical skills or coding knowledge are required, and everything runs inside your browser; no data is exchanged with an external server.
You just need to enter the data in the correct format, and the tool handles the rest instantly.
It’s a straightforward, practical way to generate informative charts from your data lists. Once the chart is generated, you can easily download it for use in reports, presentations, or simply to
better visualize and understand your data.
Predicting the rate of inbreeding in populations undergoing four-path selection on genomically enhanced breeding values
Deterministic predictions of response to multi-trait genomic selection in a single generation in a population with four-path programs were developed [
]. That is, the selection paths in four-path programs are sires to breed sires (SS), sires to breed dams (SD), dams to breed sires (DS), and dams to breed dams (DD). However, when creating formulas
for calculating the asymptotic response to index or single-trait selection in four-path selection programs rather than in a single generation, the initial genetic response in generation 0
overestimated the asymptotic response due to the decrease in equilibrium genetic variance from generation 0 onwards [
]. Consequently, to safeguard the genetic variation of the population over the long term, the rate of inbreeding needs to be restricted to an acceptable level. Therefore, one needs to know the
expected rate of inbreeding as well as the equilibrium genetic response before choosing a breeding scheme.
A population with discrete generations under mass selection in a four-path selection program is modeled to predict the rate of inbreeding in the long term. When sires in the SS path are used with
constant selection intensity and in equal number throughout the usage period of several years, every SS sire belongs to a single or exclusive category. Similarly, SD, DS, and DD parents each belong
to a single or exclusive category when they are used with constant selection intensity and in equal numbers over several years. Consequently, generations can be regarded as discrete rather than
overlapping. A formula is needed that is practical for current livestock breeding methods and that predicts the approximate rate of inbreeding (ΔF) in populations where selection is performed
according to four-path programs.
The rate of inbreeding is proportional to the sum of squared long-term genetic contributions [
]. General predictions of expected genetic contributions were developed by Woolliams et al [
] by using equilibrium genetic variances instead of second-generation genetic variances. Methods were developed by Bijma and Woolliams [
] to predict rates of inbreeding in populations selected on breeding values according to best linear unbiased prediction (BLUP) [
]. A formula was developed for predicting the rate of inbreeding in four-selection path programs [
]; however, this formula ignored the effect of selection. The purpose of the current study was to develop a formula for predicting the rate of inbreeding in four-path selection programs that
incorporated the effect of selection and was practical for use under real-life conditions of cattle breeding.
Prediction of expected long-term genetic contributions
Our prediction method is based on the concept of long-term genetic contributions. The long-term genetic contribution of individual i (r[i]) in generation t[1] is defined as the proportion of genes from individual i that are present in individuals in generation t[2] deriving by descent from individual i, where (t[2] − t[1]) → ∞ [
]. That is, after several generations, the genetic contributions of ancestors stabilize and become equal for all descendants, i.e., the ultimate proportional contribution of an ancestor to its
descendants is reached.
Selection is performed in four categories of selection path (SS, SD, DS, and DD). Rates of inbreeding can be expressed in terms of the expected contributions of these categories [
where 1′ = (1 1 1 1), N is a 4×4 diagonal matrix containing the number of selected parents for element (i, i) as N[i,i], N[1,1] is the number of sires in SS and is referred to as N[SS], N[2,2] is the
number of sires in SD and is referred to as N[SD], N[3,3] is the number of dams in DS and is referred to as N[DS], and N[4,4] is the number of dams in DD and is referred to as N[DD]. In addition, u^2 = (u[i,SS]^2 u[i,SD]^2 u[i,DS]^2 u[i,DD]^2), where u[i,SS] is the expected lifetime long-term genetic contribution of individual i in category SS conditional on its selective advantage (which in mass selection is
the genomically enhanced breeding value [GEBV]), and u[i,SD], u[i,DS], and u[i,DD] are the expected lifetime long-term genetic contributions of individual i in categories SD, DS, and DD,
respectively. Furthermore, δ = (δ[SS] δ[SD] δ[DS] δ[DD]), where δ[SS] is the correction factor for deviations of the variance of family size from independent Poisson variances in the selected
offspring from sires in SS; δ[SD], δ[DS], and δ[DD] are corrections for deviations of the variance of the family size from independent Poisson variances in the selected offspring from parents in SD,
DS, and DD, respectively.
The selective advantage of the ith sire in SS (S[i,SS]) and in SD (S[i,SD]) in the linear model is:
where A[i,SS] is the breeding value of sire i in SS or SD, Ā[i,DS and DD] is the average breeding value of dams mated to the ith sire in SS and SD, respectively; the dams mated to the ith sire in SS
belong to the DS category, and the dams mated to the ith sire in SD belong to the DD category; and Ā[i,SS], Ā[i,SD], Ā[i,DS], and Ā[i,DD], are the average breeding values of the individuals in the
SS, SD, DS, and DD categories.
The selective advantage of the ith dam in DS (S[i,DS]) and in DD (S[i,DD]) in the linear model is:
where A[i,DS and DD] is the breeding value of dam i in DS and DD, respectively; A[i,SS and SD] is the breeding value of a sire mated to the ith dam in DS and DD, respectively; the sires mated to the
ith dam in DS belong to the SS category; and the sires mated to the ith dam in DD belong to the SD category.
Expected contributions (u[i,SS,SD,DS,or DD]) are predicted by linear regression on the selective advantage. That is, u[i] = α + βS[i], where α is the expected contribution of an average parent in its category, and β is the regression coefficient of the contribution of i on its selective advantage (S[i]). In addition, α can be obtained according to Woolliams et al [
where G is a 4×4 matrix representing the parental origin of genes of selected offspring in the order of SS, SD, DS, and DD category, i.e., representing rows offspring and columns parental categories.
That is,
is the left eigenvector of
with eigenvalue 1; the left eigenvector is obtained according to Bijma and Woolliams [
] and is equal to (0.25 0.25 0.25 0.25).
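As a quick numeric check of the eigenvector statement: under the standard four-path gene-flow pattern, selected males (rows SS and SD) derive half their genes from SS sires and half from DS dams, while selected females (rows DS and DD) derive half from SD sires and half from DD dams. The specific entries of G below are an assumption for illustration (the paper's own G follows its cited derivation, which is not reproduced in this excerpt), but under them the uniform vector is indeed a left eigenvector with eigenvalue 1.

```python
# Assumed gene-flow matrix: rows = offspring category, columns = parental
# category, in the order SS, SD, DS, DD (illustrative, not from the paper).
G = [
    [0.5, 0.0, 0.5, 0.0],  # offspring entering SS: half SS sires, half DS dams
    [0.5, 0.0, 0.5, 0.0],  # offspring entering SD: same parental origins
    [0.0, 0.5, 0.0, 0.5],  # offspring entering DS: half SD sires, half DD dams
    [0.0, 0.5, 0.0, 0.5],  # offspring entering DD: same parental origins
]
v = [0.25, 0.25, 0.25, 0.25]

# Left multiplication v'G: column j receives sum_i v[i] * G[i][j]
vG = [sum(v[i] * G[i][j] for i in range(4)) for j in range(4)]
```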
Solutions for β are obtained according to Woolliams et al [
note that the right hand side of (
) is unaffected by the number of parents, so that
is inversely proportional to the number of parents (that is,
), where,
is a 4×4 identity matrix,
is a 4×4 matrix of regression coefficients with
being the regression coefficient of
of a selected offspring i of category
(SS, SD, DS, DD) on
of its parent j of category
(SS, SD, DS, DD). For example,
is the regression coefficient of
of a selected offspring i of SD on
of its parent j of SS. Given that SS is the sires to breed sons category, we have non-zero elements,
, in
as elements (1,1) and (2,1), respectively. In the same way, since SD is the sires to breed daughters category, we have non-zero elements,
, in
as elements (3,2) and (4,2), respectively. Because DS is the dams to breed sons category, we have non-zero elements,
, in
as elements (1,3) and (2,3), respectively. And given that DD is the dams to breed daughters category, we have non-zero elements,
, in
as elements (3,4) and (4,4), respectively.
In addition, Λ is a 4×4 matrix of regression coefficients, with λ[xy] being the regression coefficient of the number of selected offspring of category x on S[j,y] of its parent j of category y. In
the same way as Π, we have non-zero elements, λ[SS,SS] and λ[SD,SS], λ[DS,SD] and λ[DD,SD], λ[SS,DS] and λ[SD,DS], and λ[DS,DD] and λ[DD,DD] in Λ as elements (1,1) and (2,1), (3,2) and (4,2), (1,3)
and (2,3), and (3,4) and (4,4), respectively. Consequently,
representing rows as offspring and columns as parental categories.
In our current study, elements in matrices
were calculated from Woolliams et al [
] and Bijma and Woolliams [
], as outlined in
Appendices A and B
The sires in the SS category are included among the sires in SD category. That is, the sires in the SS category are selected not only to breed sons but as sires in the SD category to breed daughters.
Similarly the dams in the DS category are included among the dams in the DD category. The dams in the DS category are selected not only to breed sons but as dams in the DD category to breed
daughters. Therefore, after applying the procedure of Bijma and Woolliams [
], the number of sires in SD is larger than that of sires in SS, and the number of dams in DD is larger than that of dams in DS. Therefore,
, where
E denotes the expectation with respect to the selective advantage,
and E(Ā[DS] – Ā[DD]) = (i[DS] – i[DD])σ[A,f],
note that variance of selective advantage (
, and
) is not affected greatly by the number of parents (
, and
), since the term of
is adjustment for finite population size, where
are the equilibrium genetic variance in the male and female populations, respectively;
are the equilibrium reliability of GEBV in the male and female populations, respectively; and
, and
are variance reduction coefficients for offspring selection in SS, SD, DS, and DD, respectively. Note that covariances of mates between SS and SD and between DS and DD are zero, because of random
mating. General predictions of expected genetic contributions were developed using equilibrium genetic variances instead of second-generation genetic variances [
]. Therefore, variances thereafter refer to those in equilibrium.
The accounting percentage derived from SS, SD, DS, and DD for the rate of inbreeding (ΔF) is obtained,
When the effect of selection on inbreeding is ignored, i.e., β = 0, E(ΔF) = (1/2)(1′N[0]U[0]1) = (1/32)(1/N[SS] + 3/N[SD] + 1/N[DS] + 3/N[DD]).
This result is in agreement with the formula from Gowe et al [
], which likewise neglects the effects of selection on ΔF.
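For a quick numeric feel, the no-selection rate cited from Gowe et al, ΔF = (1/32)(1/N[SS] + 3/N[SD] + 1/N[DS] + 3/N[DD]), can be evaluated for the three parent-number scenarios used in the example applications; the sketch also converts each ΔF to an effective population size via the standard relation N[E] = 1/(2ΔF).

```python
# Gowe et al. no-selection rate of inbreeding for the four-path parent
# numbers (SS, SD, DS, DD), plus the implied effective population size.
def delta_F_no_selection(n_ss, n_sd, n_ds, n_dd):
    return (1.0 / n_ss + 3.0 / n_sd + 1.0 / n_ds + 3.0 / n_dd) / 32.0

# The three parent-number scenarios described in the text.
scenarios = [(20, 50, 100, 7000), (40, 100, 200, 14000), (60, 150, 300, 21000)]
results = []
for n in scenarios:
    dF = delta_F_no_selection(*n)
    results.append((dF, 1.0 / (2.0 * dF)))
# Doubling (or tripling) every parent count halves (or thirds) dF,
# matching the proportionality discussed in the Results.
```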
Correction of E (ΔF) from Poisson variances
The correction for deviations of the variance of the family size from independent Poisson variances in the selected offspring from SS, SD, DS, and DD parents, i.e., δ[SS], δ[SD], δ[DS], and δ[DD], can be approximated by Woolliams and Bijma [
According to Woolliams and Bijma [
ΔV[SS], ΔV[SD], ΔV[DS], and ΔV[DD] are 4×4 matrices giving the variances of selected family size as deviations from Poisson variance, obtained by applying a binomial distribution to the family size from the parents of SS, SD, DS, and DD, respectively, and
is the selective advantage of parents in category
(SS, SD, DS, DD). Elements of
ΔV[SS,SD,DS,or DD]
are shown in
Appendix C
Example applications of the formula
To demonstrate the application of our formula, we assumed only two quantitative traits: trait 1 was assumed to be moderately heritable, with h^2 = 0.3, whereas trait 2 was assumed to have low
heritability, with h^2 = 0.1. These traits are selected as single traits expressed as GEBV. Furthermore, we assumed an aggregate genotype as a linear combination of genetic values, each weighted by
the relative economic weights, which was expressed as a[1]g[1] + a[2]g[2], where g[i] is the true genetic value for trait i, a[i] is the relative economic weight for trait i, and the genetic
correlation between traits 1 and 2 was assumed as 0.4. Index selection was performed to select a[1]g[1] + a[2]g[2], that is, breeding goal (H), under the assumption that the relative economic weight
between traits 1 and 2 is 1:1. Breeding value (A) was defined as described earlier in the Methods; for example, the breeding value of sire i in SS was defined as A[i,SS]. Similarly, the breeding goal
value (H) of sire i in SS can be expressed as H[i,SS]; note that the formula that we developed in Methods can be applied not only to breeding value (A) but also to breeding goal value (H).
In our example, we assumed the reliabilities of the GEBVs for traits 1 and 2 to be 0.5721 and 0.4836, respectively [
]. Index selection (I) was performed as I = a[1]GEBV[1] + a[2]GEBV[2]
, because GEBVs are assumed to be derived from multiple-trait BLUP (MT BLUP) genetic evaluation methods in the current study (as done for single-step genomic BLUP [
]). We calculated equilibrium genetic variances and reliabilities based on Togashi et al [
]. The initial (generation 0) and equilibrium genetic variances and reliabilities for single-trait selection (
= 0.3 or
= 0.1) and index selection are shown in
Table 1
. Rates of inbreeding were calculated based on equilibrium genetic variances and reliabilities, because regression coefficients of the number or breeding value of selected offspring on the breeding
value of the parent are equal for the parental and offspring generations under equilibrium genetic variances and reliabilities.
We considered two scenarios for the selection percentages for SS, SD, DS, and DD (5%-12.5%-1%-70% and 1%-5%-1%-70%) and three scenarios for the numbers of selected parents of SS, SD, DS, and DD (20-50-100-7,000, 40-100-200-14,000, and 60-150-300-21,000), giving six scenarios in total. Note that the two selection-percentage scenarios differ only along the SS and SD selection paths, because under actual breeding
conditions, selection intensity can be adjusted more easily in male selection paths (SS and SD) than in female selection paths (DS and DD). The numbers of male and female offspring from a dam of DS,
i.e., f[m,DS] and f[f,DS], were set at 4. The number of female offspring from a dam of DD, i.e., f[f,DD], was set at 1.4. These numbers are derived from the years of usage of a dam and the reproduction method
(ovum collection, in vitro fertilization, or embryo transfer). When DS and DD parents are used with constant selection intensity and in equal numbers over several years, they belong to a single or
exclusive category. The numbers are used to compute the deviation of the variance of the family size from Poisson variance.
Rates of inbreeding
The rates of inbreeding without correction for deviation from Poisson variances (that is, the rates of inbreeding with Poisson family size) are shown in
Table 2
. Because the rates from Gowe et al [
] do not account for selection, ΔF is the same between two selection percentages in SS, SD, DS, and DD, i.e., 1%-5%–1%-70% and 5%-12.5%–1%-70%. In contrast, ΔF derived from the method developed in
the current study increased with the increase in selection intensity. When we applied our formula, ΔF was lower when selection was ignored than when it was included, suggesting that ΔF was
underestimated when selection was ignored. The ratio of ΔF when selection was ignored to that when it was included was about 0.61 under the selection percentages of 5%-12.5%–1%-70% for the SS, SD,
DS, and DD selection paths, whereas the ΔF ratio was 0.53 to 0.56 under the selection percentage condition of 1%-5%–1%-70%. That is, calculation according to Gowe et al [
] underestimated ΔF by approximately 40% and 45% under selection percentages of 5%-12.5%–1%-70% and 1%-5%–1%-70% for the SS, SD, DS, and DD selection paths, respectively. In contrast, the rates of
inbreeding under selection estimated by using our formula were 63% to 87% greater than those calculated according to the current working formula, which does not consider selection [
]. The ratio of ΔF for 5%-12.5%–1%-70% to that for 1%-5%–1%-70% was 0.88 to 0.89, resulting in an approximately 12% decrease in ΔF due to increasing the selection percentage or decreasing the
selection intensity for SS and SD for all three scenarios compared in the numbers of parents in SS, SD, DS, and DD (20-50–100-7,000, 40-100–200-14,000, and 60-150–300-21,000). In contrast, the
decrease in ΔF due to the increase in the number of parents was proportional to the numbers. The ΔF under the number of parents in SS, SD, DS, and DD (40-100–200-14,000 and 60-150–300-21,000) was
approximately half and one-third of the ΔF under the parent numbers (20-50-100-7,000), respectively, for both selection-percentage scenarios for parents in SS, SD, DS, and DD
(5%-12.5%–1%-70% and 1%-5%–1%-70%). Consequently, the decrease in the rate of inbreeding likely would be greater with an increase in the number of parents than with a decrease in selection intensity;
however, we need to perform more trials at different selection intensities to confirm this association.
In general, both genetic gain and ΔF increase with an increase in selection intensity. However, because the number of parents has a greater effect on inbreeding than does selection intensity,
increasing the number of parents is one option for offsetting the increase in ΔF due to an increase in selection intensity.
The rate of inbreeding was slightly lower in single-trait selection with a low heritable trait (
= 0.1) than the other selection methods (i.e., single-trait selection with a trait (
= 0.3) and index selection [
Table 2
]). However, the difference was not so remarkable. Consequently, we consider the major factors in the rate of inbreeding to be the number of parents and the selection intensity in each of the four
selection paths.
Values for effective population size, expressed as N[E] = 1/(2ΔF),
are shown in
Table 3
. Using a method that ignores selection [
] overestimated the effective population size due to ΔF compared with that computed by using our formula, which accounts for selection. The overestimation was greater when the selection percentage in
SS, SD, DS, and DD was 1%-5%–1%-70% than when it was 5%-12.5%–1%-70%. The ratio of NE for the 5%-12.5%–1%-70% condition to that for 1%-5%–1%-70% became greater as the numbers of parents in SS, SD,
DS, and DD increased from 20-50–100-7,000 to 40-100–200-14,000 and then to 60-150–300-21,000. This pattern is consistent with the suggestion that increasing the number of parents is one option for
offsetting an increase in ΔF due to an increase in selection intensity (
Table 2
). That is, decreasing ΔF is equivalent to increasing the effective population size.
The expectations of the square of the long-term contribution of an individual (that is, E(u[i,SS]^2), E(u[i,SD]^2), E(u[i,DS]^2), and E(u[i,DD]^2)) in SS, SD, DS, and DD are shown in
Table 4
. The expected square of the long-term contribution of an individual was greatest in SS of the four selection paths (SS, SD, DS, and DD), since selection intensity is highest and the number of parents is smallest there. By contrast, it was smallest in DD, since selection intensity in DD is lowest and the number of parents is largest. The square of the long-term contribution of an individual in SD was greater than that in DS, mainly because the number of parents in SD is smaller than in DS. With the increase in selection intensity (decrease in selection percentage) in the four selection paths (SS-SD-DS-DD), i.e., from 5%-12.5%-1%-70% to 1%-5%-1%-70%, the increase in the squared long-term contribution of individuals in SS and DS was greater than that in SD and DD, because the selective advantage of an individual in DS is the sum of its breeding value and the breeding value of its mate in the SS category, which has the greatest long-term contribution of the four selection paths. The increase in the number of parents decreased the squared long-term contribution of an individual in SS, SD, DS, and DD, because the expected contribution of an average parent (
) in each of the four selection paths decreased with the increase in the number of parents. The square of long-term contribution of an individual was slightly lower in single-trait selection with a
low heritable trait (
= 0.1) than the other selection methods (i.e., single-trait selection with a trait (
= 0.3) and index selection). However, the difference was not so remarkable in all selection methods (single-trait selection with a trait (
= 0.1 or 0.3) and index selection), which was consistent with the trend that the rate of inbreeding was almost the same in all selection methods (
Table 2
The accounting percentage derived from SS, SD, DS, and DD for the rate of inbreeding (ΔF) when the numbers of parents in SS, SD, DS, and DD are 40-100–200-14,000 is shown in
Table 5
. The accounting percentage in SS was the greatest of all four selection paths for both selection-percentage scenarios in SS, SD, DS, and DD (that is, 5%-12.5%-1%-70% and
1%-5%–1%-70%), because the expectation of the square of lifetime long-term contribution of an individual was the greatest in SS of all the four selection paths (
Table 4
). The sum of the accounting percentages in SS and SD was approximately 90% of ΔF, because the number of male parents in SS and SD was smaller than that of female parents in DS and DD, and selection
intensity in male parents is generally higher than that in female parents. In addition, the accounting percentage in each of the four selection paths when the numbers of parents in SS, SD, DS, and DD
were 40-100–200-14,000 (
Table 5
) was approximately the same as the other scenario when the numbers of parents in SS, SD, DS, and DD were 20-50–100-7,000 or 60-150–300-21,000, although the accounting percentage in SS, SD, DS, and
DD in the other scenarios was not shown. This is mainly because the expected contribution of an average parent (
) and the regression coefficient of the contribution of an individual on its selective advantage (
) are inversely proportional to the number of parents as explained previously in
equation (1)
. Consequently, the accounting percentage derived from SS, SD, DS, and DD for the rate of inbreeding (ΔF), (that is, the relative magnitude of ΔF in SS, SD, DS, and DD), resulted in almost the same
for all three scenarios compared in the numbers of parents in SS, SD, DS, and DD, even if the absolute magnitude of ΔF derived from each of the four selection paths differed in the number of parents
in each of the four selection paths.
Correction derived from deviation from Poisson variance
Corrections for deviations in the variance of the family size from independent Poisson variances (×10
) approximated by binomial distribution are shown in
Table 6
. The magnitude approximated by binomial distribution under the assumed selection percentages in the SS, SD, DS, and DD selection paths of 5%-12.5%–1%-70% and 1%-5%–1%-70% varied from −0.29×10
to −0.88×10
, and −0.04×10
to −0.12×10
, respectively. In comparison, the rates of inbreeding with Poisson family size without correction shown in
Table 2
varied from 0.2×10
to 0.7×10
. Therefore, because the magnitude of correction was much smaller than that of the rates of inbreeding with Poisson family size without correction, correction is unnecessary; thus the rates of
inbreeding without correction (
Table 2
) are reasonable rates of inbreeding. However, the method in terms of the factorial moments [
] should be examined to confirm that the magnitude of correction is much smaller than those of ΔF with Poisson family size without correction.
Selection intensities and variance reduction coefficients should be adjusted by using the procedure from Wray and Thompson [
] in situations of few families with numerous candidates per family, for example, when the number of selected parents is only 5 or 10 [
]. Because we set the number of parents in SS at 20, 40, and 60, we did not adjust the selection intensity in the SS path. In addition, selection intensity in DD generally is much smaller than those
in the SS, SD, and DS selection paths. Consequently, when selection in DD is not performed, the selection intensity and the variance reduction factor need to be set at zero for the DD selection path in
the formula developed in the current study.
We here developed a formula for calculating the rates of inbreeding in populations under selection based on GEBV. The population is selected along the four selection paths of SS (sires to breed
sons), SD (sires to breed daughters), DS (dams to breed sons), and DD (dams to breed daughters). Assuming that the number and selection intensity of parents remained the same over the period of usage
(several years) enabled us to regard generations as discrete generations. The effect on decreasing the rate of inbreeding was greater when the number of parents was increased than when the selection
intensity was decreased, and both number of parents and the selection intensity in four-path selection emerged as major factors affecting the rate of inbreeding. In general, both genetic gain and ΔF
tended to increase in line with any increase in selection intensity. Therefore, increasing the number of parents is one option for offsetting the increase in ΔF due to an increase in selection
intensity. In particular, increasing the number of male parents would be effective, since the accounting percentage for ΔF from male parents is greater than that from female parents.
When applied without correction for deviation of family size from Poisson distributions, the formula we developed here would be highly useful as a practical method for predicting the approximate rate
of inbreeding (ΔF) in populations where selection is performed according to four-path programs.
Spreading of n Cooperative Pathogens: Exact Solution of Meanfield Approximations
Infectious diseases are among the most fatal threats in human history [1]. Many studies have investigated how these diseases become epidemic and how their spread through society can be prevented. One of the most interesting properties of the phenomenon is the outbreak of the disease, which resembles a phase transition. There are examples of coinfections in which two or more pathogens cooperate in infecting individuals; i.e., if an individual is infected by one disease, the chance of becoming infected by the second is much higher. Recently, such a phenomenon has been modeled using two interacting SIR models, and discontinuous transitions, in contrast to single SIR dynamics, were observed [2]. However, the mean-field equations were only solved numerically, and the insight into how the transition occurs was incomplete. Here we solve the equations analytically and show that the transition occurs when something like a saddle-node bifurcation happens in the system (see Figure 1). Through this analytical solution we can derive the exact diagram of the order parameter as a function of the infection rates. Although the solution can only be expressed in terms of the roots of a non-algebraic equation, in certain limits we can provide a simple yet relatively accurate expression for the order parameter as a function of the infection rates.
Figure 1: The transition occurs when two roots of X(T)=0 vanish.
We have also generalized the above model to the case that we have n interacting SIR dynamics.
where one quantity is the fraction of agents that have experienced exactly a given number of diseases, and another is the fraction of agents that carry one particular disease. Again, the mean-field equations can be solved analytically, and with the usual initial conditions the same phenomenon arises.
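The root-merging picture described above can be illustrated on the simpler single-SIR analogue, whose final epidemic size r solves the transcendental fixed-point equation r = 1 - exp(-R0*r). The sketch below finds the nontrivial root by fixed-point iteration; it is an illustrative analogue only, not the talk's n-pathogen cooperative equations.

```python
import math

# Single-SIR final-size equation r = 1 - exp(-R0*r), solved by
# fixed-point iteration from a nonzero starting guess (to avoid
# the trivial root r = 0 when an epidemic occurs).
def final_size(R0, tol=1e-12, iters=10_000):
    r = 0.5
    for _ in range(iters):
        r_new = 1.0 - math.exp(-R0 * r)
        if abs(r_new - r) < tol:
            return r_new
        r = r_new
    return r

r = final_size(2.0)  # above threshold: a nontrivial final size exists
```

Below the threshold R0 = 1 the iteration collapses to the trivial root r = 0, mirroring how the outbreak appears only once a nontrivial root of the final-size equation exists.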
Tuesday, September 25, 2018 - 18:30 to 18:45