| content | meta |
|---|---|
| stringlengths 86 to 994k | stringlengths 288 to 619 |
River Crossings
Why we like this activity
1. Combining storytelling with preposterous scenarios (and the potential to create your own) makes for a great problem for engaging students.
2. As students reason through the possible solutions, they develop their communication and logical reasoning skills.
3. The activity is low-floor in that every student can get started by the simple act of trying something, yet it gets progressively more difficult.
4. Students are naturally pushed towards developing their own notation structures that help them strategize and discover patterns.
Why do YOU like this activity? Try it out and let us know at info@jrmf.org!
Connection to Math Standards
Make sense of problems and persevere in solving them.
River crossings is a low-floor problem, in that any student can get started by the simple act of trying something and seeing what happens. And then trying something else. This allows the student to
focus on the mathematical process and helps develop the problem-solving skill of looking ahead, which is the ability to anticipate the results of the next few steps. From there, the students can
start thinking about the solution as a pathway and think about what might hinder or help their progress. And when they successfully solve one problem, there is always another problem to challenge and
engage them.
Construct viable arguments and critique the reasoning of others.
This activity is great for getting students to explain their thinking because it’s framed as a story. This provides starting points for their arguments like “We need to bring the wolf, because…” or
“We can’t leave the cabbage, because three moves from now…”, which encourages them to practice their mathematical communication skills.
Model with mathematics.
Working with constraints is an important idea in mathematics, and river crossing puzzles make readily apparent the need to develop an organized and readable way of representing them. This notation will vary between students, but they should be encouraged to develop some kind of tracking system that makes sense to them. It will also help with logical reasoning as students explain their model.
Look for and make use of structure.
Tracking their moves helps here as students may recognize cases or dead ends they have already encountered. This helps them move from trial and error into sense-making and pattern recognition. It provides the tools to help them reason through more challenging problems.
Connection to Math
Problem History
River crossing puzzles are a genre of logic puzzles often attributed to Alcuin of York (735-804) although versions of this puzzle have been found throughout Europe and Africa. In our sequences, we
have retained two of the classics – wolf-goat-cabbage and adult-child puzzles – and added two more. Zombies and Humans is our spin on the dated cannibal-missionary puzzles and Monsters is our fun,
novel puzzle that adds an arithmetic aspect.
Mathematical Theory
State Transition Graphs
At the heart of every river crossing puzzle is a collection of legal arrangements (or states) and transitions from one state to another. A natural way to represent a puzzle is as a graph, taking
these states as vertices and transitions as edges. In the classic wolf-goat-cabbage puzzle, there are 16 possible states, divided into legal and illegal:
We can then examine the 10 legal states and diagram the possible transitions:
The far left vertex is the starting state and the far right is the desired end state, so solving the puzzle now amounts to finding a path from the start to end following the edges. We can see there
are two shortest paths (the upper path and the lower path), each requiring 7 crossings.
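To make the state-transition picture concrete, here is a minimal sketch (our own illustration in Python, not part of the puzzle materials) that encodes each state by which bank the ferry person, wolf, goat, and cabbage occupy, keeps only the legal states, and runs a breadth-first search to recover a shortest solution of 7 crossings:

from collections import deque

ITEMS = ("ferry person", "wolf", "goat", "cabbage")
START, GOAL = (0, 0, 0, 0), (1, 1, 1, 1)   # 0 = starting bank, 1 = far bank

def legal(state):
    person, wolf, goat, cabbage = state
    if wolf == goat != person:        # wolf eats goat when unsupervised
        return False
    if goat == cabbage != person:     # goat eats cabbage when unsupervised
        return False
    return True

def neighbors(state):
    person = state[0]
    for i in range(len(ITEMS)):            # i = 0 means crossing alone
        if i != 0 and state[i] != person:  # can only ferry an item on the same bank
            continue
        new = list(state)
        new[0] = 1 - person
        if i != 0:
            new[i] = 1 - state[i]
        new = tuple(new)
        if legal(new):
            yield new

def shortest_solution():
    parent = {START: None}
    queue = deque([START])
    while queue:
        state = queue.popleft()
        if state == GOAL:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)

print(len(shortest_solution()) - 1, "crossings")   # prints 7, matching the graph above

Swapping in a different legal() test or item list adapts the same search to the other puzzles in the sequence.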
Vertex Covers
In a generalized version of the wolf-goat-cabbage problem, imagine that you have a large collection of objects where each pair can either be left together unsupervised or not. If there are n
objects, then the puzzle can always be solved if there are n spaces in the boat, since the ferry person can keep an eye on all of them with a single trip. However, the puzzle may not be solvable if
the boat only has room for a single item. There is a smallest boat that can be used to solve the puzzle, and the number of items this boat can ferry is called the Alcuin number. By the above
discussion, the Alcuin number is always between 1 and n, inclusive.
For more details on the history and theory behind River Crossings puzzles, see here!
|
{"url":"https://jrmf.org/puzzle/river-crossings/","timestamp":"2024-11-03T18:21:57Z","content_type":"application/xhtml+xml","content_length":"65454","record_id":"<urn:uuid:f984dc87-d0fb-4933-9f1c-d753d7d63422>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00568.warc.gz"}
|
Thermodynamic Cycles
Choose the best answer. It does not matter how many questions you miss. What matters is that you learn from any mistakes.
1. Suppose a cycle started at the red ball, and the cycle will end up at the red ball. Which variable will equal zero?
1. ΔU[by the gas]
2. Q[by the surroundings]
3. W[by the surroundings]
2. What sign is the net work[by the surroundings] for this cycle?
1. Positive
2. Negative
3. Zero
3. What is the sign of Q[by the surroundings] for this cycle?
1. Positive
2. Negative
3. Zero
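A quick first-law reminder that may help when checking your answers (this is general thermodynamics, not an answer key, and it assumes the quiz's convention that Q[by the surroundings] and W[by the surroundings] are both measured as energy delivered to the gas; the specific signs in questions 2 and 3 depend on the direction of the cycle in the diagram, which is not reproduced here): internal energy is a state function, so over one complete cycle ΔU[by the gas] = 0, and the first law ΔU = Q + W then gives Q[by the surroundings] = -W[by the surroundings]. In other words, the net heat and the net work exchanged with the surroundings must have opposite signs (or both be zero).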
|
{"url":"https://www.mrwaynesclass.com/thermo/quiz-reading/CycleQuiz.html","timestamp":"2024-11-14T17:34:52Z","content_type":"application/xhtml+xml","content_length":"44786","record_id":"<urn:uuid:142507cc-c8dd-4704-b11f-11a87f2103b9>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00418.warc.gz"}
|
Key Stage Wiki
Key Stage 2
Filtering is when you separate liquid from insoluble solids.
About Filtering
Filtering cannot separate anything dissolved in a liquid.
When you filter, the solid will get stuck in the filter paper and the liquid will pass through.
You can separate mud from water in the puddle with filter paper and a funnel.
Key Stage 3
Filtration is the process of separating a mixture of a liquid and an insoluble solid.
About Filtering
Key Stage 4
Filtration is the process of removing an insoluble solid from a mixture with a liquid.
About Filtration
In an experiment, a mixture of an insoluble solid and a liquid is filtered using filter paper. However, on an industrial scale many different types of filter can be used to separate insoluble solids from mixtures with liquids.
Filtration can only be used for: separating a mixture of a liquid and an insoluble solid.
Filtration cannot be used for: separating anything that is dissolved in the liquid, such as a soluble solid in a solution.
|
{"url":"https://keystagewiki.com/index.php/Filtration","timestamp":"2024-11-08T10:54:18Z","content_type":"text/html","content_length":"32069","record_id":"<urn:uuid:38d9bd12-2750-4aa1-9e63-78d516c70bbc>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00366.warc.gz"}
|
Multiplication Minute Tests
Mad minute multiplication and times tables tests in google quizzes for free. Have your homeschoolers or students try and answer all the questions in a minute for these multiplication quizzes. Each
quiz is in google quiz or printable format. Free 3rd to 4th grade multiplication worksheets in printable PDFs and in google quizzes.
All worksheets are created by experienced and qualified teachers. Send your suggestions or comments.
|
{"url":"https://worksheetplace.com/index.php?function=DisplayCategory&showCategory=Y&links=2&id=116&link1=12&cn=Multiplication_Tables_In_Google_Quizzes&link2=116&cn=Multiplication_Minute_Tests","timestamp":"2024-11-08T08:22:14Z","content_type":"text/html","content_length":"20609","record_id":"<urn:uuid:5b70df60-d3cb-42e6-b1ec-b0aeba086f64>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00013.warc.gz"}
|
Draw A Line Segment Of Length 6.3 Cm
Draw A Line Segment Of Length 6.3 Cm - Two constructions come up repeatedly with a 6.3 cm segment: dividing it in a given ratio (such as 3:4) and drawing a second line parallel to it at a distance of 3 cm.
To draw a line segment of length 6.3 cm and divide it in the ratio 3:4, the steps of construction are as follows. Draw AB = 6.3 cm with a ruler. Draw a ray AC making an acute angle with AB. With a compass, mark 7 (3 + 4 = 7) equidistant points A1, A2, ..., A7 along AC. Join A7 to B, then draw the line through A3 parallel to A7B; it meets AB at a point P with AP : PB = 3 : 4. The same method divides a segment in any ratio, for example marking 9 (2 + 3 + 4 = 9) equidistant points to divide a 6 cm segment in the ratio 2:3:4.
To draw a line segment of length 6.3 cm and then another line parallel to it at a distance of 3 cm, you can follow these steps using a compass. Draw AB = 6.3 cm and mark any two points E and F on it. At E and F, construct perpendiculars to AB. With the compass set to a radius of 3 cm, cut the perpendicular at E at a point D and the perpendicular at F at a point G, then join DG. Because DG and AB lie in the same plane and never meet, DG is parallel to AB at a distance of 3 cm.
A related construction: draw a line segment PQ of length 6.3 cm, take a point R on PQ such that PR = 3 cm, and draw RS perpendicular to PQ at R.
|
{"url":"https://classifieds.independent.com/print/draw-a-line-segment-of-length-6-3-cm.html","timestamp":"2024-11-04T23:53:18Z","content_type":"application/xhtml+xml","content_length":"23402","record_id":"<urn:uuid:df2783c8-da77-409d-9892-a9a80f12436b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00413.warc.gz"}
|
How to define polynomial p(x_i, x_j) while x_i, x_j runs over available variables?
Let's say I have variables x_1, x_2, ..., x_d, where d is some integer, and a polynomial p(a,b) already defined. How could I get a list of p(x_i, x_j) where i, j run over 1, 2, ..., d, possibly with some other qualifiers (e.g. we must have i not equal to j)? I want to define an ideal in this way; once I get the list I know how to proceed.
Thank you in advance!
1 Answer
I suppose our multivariate polynomial ring R is defined as follows:
sage: R = PolynomialRing(QQ, 'x_', 3)
sage: x = R.gens()
Now it depends a little bit on how your bivariate polynomial is defined. Is it a Python function? A Sage callable symbolic expression? An element of a polynomial ring? An element of the symbolic ring?
If it is a Python function:
sage: p = lambda a,b: a^2 + b^2
sage: [p(x[i], x[j]) for i in range(len(x)) for j in range(len(x)) if i != j]
If it is a Sage callable expression:
sage: p(a,b) = a^2 + b^2
sage: [R(p(x[i], x[j])) for i in range(len(x)) for j in range(len(x)) if i != j]
If it is an element of the symbolic ring:
sage: var('a,b')
sage: p = a^2 + b^2
sage: [R(p(a=x[i], b=x[j])) for i in range(len(x)) for j in range(len(x)) if i != j]
If it is a polynomial in a polynomial ring:
sage: S.<a,b> = PolynomialRing(QQ)
sage: p = a^2 + b^2
sage: P = p.change_ring(R)
sage: [P(x[i],x[j]) for i in range(len(x)) for j in range(len(x)) if i != j]
All give the same result, a list of elements of R:
[x_0^2 + x_1^2,
x_0^2 + x_2^2,
x_0^2 + x_1^2,
x_1^2 + x_2^2,
x_0^2 + x_2^2,
x_1^2 + x_2^2]
You can add other conditions in the list comprehension using and, etc.
Also you can use itertools.product or itertools.combinations:
sage: p = lambda a,b: a^2 + b^2
sage: import itertools
sage: [p(x[i], x[j]) for (i,j) in itertools.product(range(len(x)),repeat=2) if i != j]
And you don't have to use indices if your condition doesn't depend on it:
sage: p = lambda a,b: a^2 + b^2
sage: [p(z,w) for (z,w) in itertools.product(x,repeat=2) if z != w]
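If the end goal is the ideal mentioned in the question, any of these lists can be handed to R.ideal; itertools.combinations keeps only unordered pairs, which avoids listing each generator twice. A small sketch along the same lines as above:
sage: import itertools
sage: p = lambda a,b: a^2 + b^2
sage: I = R.ideal([p(z,w) for (z,w) in itertools.combinations(x, 2)])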
|
{"url":"https://ask.sagemath.org/question/47167/how-to-define-polynomial-px_i-x_j-while-x_i-x_j-runs-over-available-variables/","timestamp":"2024-11-06T20:56:38Z","content_type":"application/xhtml+xml","content_length":"53885","record_id":"<urn:uuid:d207faa3-6e1b-4d43-a354-cf99bb3836cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00659.warc.gz"}
|
Modified Internal Rate of Return
The modified internal rate of return (MIRR), like the internal rate of return (IRR), is a measure of the return of an investment. MIRR assumes that all of a project's cash flows are reinvested at the company's cost of capital, while the regular IRR assumes that the cash flows are reinvested at the project's own IRR. Since the cost of capital is usually the more realistic reinvestment rate, the modified IRR can better indicate the project's true profitability.
MIRR is defined as follows (with n the number of periods):
MIRR = (future value of the positive cash flows, compounded at the reinvestment rate / present value of the outlays)^(1/n) - 1
For example, say a two-year project with an initial outlay of $200 and a cost of capital of 10% will return $125 in year 1 and $135 in year 2. First, we find the IRR of the project so that the net present value (NPV) = 0:
NPV = 0 = -200 + 125/(1 + IRR) + 135/(1 + IRR)^2
NPV = 0 when IRR = 19.15%
To calculate the MIRR of the project, we assume that the positive cash flows are reinvested at the company's 10% cost of capital. The future value of the positive cash flows at t = 2 is:
125(1.10) + 135 = 272.5
Next, we divide the future value of the cash flows ($272.50) by the present value of the initial outlay ($200) and find the geometric return over 2 periods. Note that if any outlays occurred after the start of the project, those outlays would have to be discounted back to present value at the finance rate (the cost of capital). In this case we only had one initial outlay of $200, so its present value is $200.
MIRR = sqrt(272.5/200) - 1 = 16.73%
In this instance, the 16.73% MIRR is lower than the IRR of 19.15%. In this case, the IRR shows an overly optimistic investment picture of the potential of the project, while the MIRR gives a more
realistic evaluation of the project.
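A small sketch of the same calculation in Python (our own illustration, assuming a single outlay at time zero and positive cash flows afterwards, using the reinvestment-rate convention described above):

def mirr(initial_outlay, cash_flows, reinvest_rate):
    n = len(cash_flows)
    # Compound each positive cash flow forward to the end of the project.
    future_value = sum(cf * (1 + reinvest_rate) ** (n - t)
                       for t, cf in enumerate(cash_flows, start=1))
    # Geometric average return over n periods.
    return (future_value / initial_outlay) ** (1 / n) - 1

print(round(mirr(200, [125, 135], 0.10) * 100, 2))   # 16.73, matching the example above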
When to use MIRR vs IRR
It depends on the project. If project cash flows can be reinvested into the project to produce the same return, then the IRR is a more accurate indicator of the return of the investment. If the
project cash flows cannot be reinvested back into the project, and are distributed back to the company, then the MIRR is a more accurate indicator of the return of the investment.
|
{"url":"https://valuationacademy.com/modified-internal-rate-of-return/","timestamp":"2024-11-07T13:13:50Z","content_type":"application/xhtml+xml","content_length":"19676","record_id":"<urn:uuid:01c67fec-c3d0-4f1f-ae36-82068475097f>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00357.warc.gz"}
|
Podcast: Targeting Cancer with 3D Modeling and Simulation - High-Performance Computing News Analysis | insideHPC
In this Engineering Out Loud podcast, Oregon State University Associate Professor Eugene Zhang and Assistant Professor Yue Zhang describe their research to help medical doctors better target
cancerous tumors by using 3D modeling and simulation.
NARRATOR: From the College of Engineering at Oregon State University, this is Engineering out Loud.
ROBERTSON: When you think of engineering do you think of human and environmental health? Maybe not so much. So, for this season we are focusing on stories of how research in engineering at Oregon
State can impact broad areas such as cancer treatment, food contamination and the detection of nuclear weapons tests. We are going to start with the topic of 3D modeling and learn how it can advance
science in ways that you might not have imagined.
[AUDIO CLIP: from Toy Story]
Video games and movies are what we conjure up when we think of 3D modeling. But it's also a tool that can help medical professionals better target cancerous tumors. Today we'll start by talking to
Eugene Zhang, a professor of computer science at Oregon State University and an expert in computer graphics and data visualization. And, of course, I start with the most important question first.
ROBERTSON: As far as a good example of animation, what is your favorite?
EUGENE ZHANG: Well my favorite is the Toy Stories, as well as Star Wars, I like Star Wars because, you know, my kids like them so I started to really get into them as well.
ROBERTSON: The Star Wars that Eugene is talking about is the animated TV series The Clone Wars from Lucasfilm Animation that ran from 2008 to 2014.
[MUSIC: Oceanside Drive, Ethereal Delusions, used with permission of the artist.]
One of the planets in this show, called Mustafar, is a volcanic planet that has constant eruptions flowing into rivers of lava. You’ll see why I mention that in a minute. I asked Eugene what has
changed to take us from very rudimentary 3D wire-frame graphics, such as the depiction of the Death Star plans in the first Star Wars movie in 1977, to the very sophisticated 3D images you see today.
E. ZHANG: Computer animation has matured as a field of research over the years and now we are talking about technologies that have been developed in the last 10-20 years with a strong focus on
numerical simulation. Like simulation of fluids. Flows, like the lava flows you see in Star Wars. There have been a lot of techniques to speed up the simulation that would have been impossible before
the GPU era.
ROBERTSON: The GPU era which stands for graphics processing unit that Eugene mentions had a breakthrough in 1995 when the first 3D add-in cards came out on the market.
E. ZHANG: With the increased computation capability we are now able to produce a movie at a much faster rate than say 30-40 years ago, for instance it might take 6 months to make another episode for
Star Wars.
ROBERTSON: So, how is what you do with 3D modeling related to what people see in the movies?
E. ZHANG: So, one of the things that a lot of people do not necessarily realize is that even day one computer graphics animation, 3D modeling has been an integral part of that. People see these
fascinating shapes, motions, but there has to be a way for them to be represented in the computer so that the artist can manipulate these shapes make them move, make them change the form and the more
efficient the representation, the easier it is for the artist to work with them. And also for scientific simulation like the simulation of lava it involves very sophisticated mathematical modeling
techniques that require very well-designed meshes.
ROBERTSON: Okay, so I’m going to jump in here and explain that a mesh made up of points in a 3D space called vertices, there are lines that connect these points, called edges. And the face is the
area between the edges.
E. ZHANG: So, in fact if you think about a simple case. Let’s say you are talking about a cube. A cube has 8 corners, and these corners will be the so called vertices and there are 12 edges
separating the 6 faces of the cube. So the surface of the cube consists of 6 faces would be exactly a mesh. And another example is to look at Spiderman. Spiderman has this very interesting sort of
mesh like pattern on the cloth. You can see points and lines and these lines intersect at a right angle and they form these rectangular patterns on the cloth of the Spiderman. That’s a simple version
of a mesh.
ROBERTSON: The reason we are spending some time with meshes here is that from what I understand from Eugene, the mesh is really, really important in 3D modeling. The mesh is the mathematical bones
onto which everything else is applied. For applications like animations or art or architecture a good mesh will allow you to create more realistic images. But, perhaps, more importantly when 3D
modeling is used for scientific research such as simulations of tornados, earthquakes and tsunamis the results will be more accurate.
E. ZHANG: In fact, it’s known in the community that 90 percent of the time that simulation researchers spend on performing a simulation was spent on generating a really good mesh. And only 10 percent
of the time was to actually run the simulation.
ROBERTSON: So, since Eugene works on perfecting these meshes, his work is fundamental to all 3D modeling and can be applied to any field. Even the modeling of complex internal organs. My next guest
will explain why she and Eugene would want to do that.
YUE ZHANG: My name is Yue Zhang. My research area is in numerical simulations and scientific visualizations.
ROBERTSON: So, Yue’s background is in mathematics, and along the way she discovered that what she really enjoys is collaborating with others to find mathematical solutions to real world problems.
Y. ZHANG: Looking at fast algorithms and also complex mathematical modeling is very interesting, very challenging. On the other hand the real-life problems can enrich these models even further,
because in real life there are always factors that need to be considered in the mathematical modeling.
ROBERTSON: So, that’s why she took the time to attend a mixer between researchers from Oregon State University and Oregon Health & Science University. The connections she and Eugene made there
eventually led to a collaboration with Dr. Wolfram Laub in the Department of Radiation Medicine. The problem that Laub wanted help with is better targeting radiation therapy for tumors associated
with prostate cancer. Yue explains why this is important.
Y. ZHANG: The difficulty is from the clinical side there is a safety margin because the radiation has toxicity and you can hurt the neighboring healthy organs. The larger the safety margin, the more
harmful it is for the neighboring organs and to reduce this safety margin so that only the tumor portion is treated with the right amount of dosage is what we are trying to help with.
ROBERTSON: Next Yue tells us why targeting the tumor at the right location can be trickier than you might think.
Y. ZHANG: Because the organs are actually moving, so the organ shapes and positions are very important, are critical because the patients can have some movement and the organs and the geometry can change.
ROBERTSON: So, people might not understand that, so why … usually we think of our organs as fairly stable so why are they moving?
Y. ZHANG: Breathing, breathing and other bodily functions.
ROBERTSON: Think about that cup of coffee you had this morning. What’s going to happen to during a radiation therapy treatment that could take hours. Your bladder is going to fill up. And if there is
a tumor next to it, the location of the tumor will change as the bladder pushes it out of the way. And so, to create a simulation of how the organs might move and change the location of the tumor,
they first have to start with a 3D model constructed from the medical scans.
Y. ZHANG: The scans are 2D and they are taken at different slices of the human body so that we can construct a volume. With a volume, then we put a mesh on this volume. Like what Eugene was saying, we put nodes, edges, cells, and faces on this volume. That now we have…
[record scratch]
ROBERTSON: Okay, I’m going to stop the tape there a second to bring in a conversation with Eugene where he talks about adding volume to the mesh.
E. ZHANG: We have an additional element called the cells, for instance in this case I would go back to the cube example, in addition to the 8 corners, 12 edges and 6 faces you can also think of the
interior of the cube being a cell and if you put a number of cubes together you would get what is called a hexahedral mesh which means the mesh is made mostly of cubes, and in that case the number of
cubes would also be an indicator of how complicated the mesh is. For instance, for the simulation that Yue deals with there are usually millions of cells involved to generate a realistic result.
ROBERTSON: And now back to Yue.
Y. ZHANG: Now we have tools to describe material properties on these cells, so if it's an organ that doesn't move much and is a little bit rigid, then we describe one material property, but if it is
something like the bladder, it’s very flexible, stretches a lot we use a different property to describe it.
[MUSIC: Oceanside Drive, Ethereal Delusions, used with permission of the artist.]
ROBERTSON: Now this is getting way more complex that a cool 3D animation. And it brings in another specialty of Eugene’s, called field processing which they are using to add information about the
material properties of the organs.
E. ZHANG: Field processing is a very new subfield of geometry processing. Instead of modeling a shape, now we are modeling things that are on the shape. It’s one thing to model the shape of the
earth, to model the mountains and the ocean and so on and so forth, but it’s another to model the magnetic field or the global ocean current flows on the earth and these are vector fields on the
surface that can provide a lot of insight into things such as the air stability, pollution and climate change. And tensors are an extension of vector fields whereas a vector field has a direction a
tensor field has multiple directions. So a tensor field is more complicated than the vector field and they can be only described mathematically by a matrix. So, you can sort of see the additional
complexity here.
ROBERTSON: Indeed, so now you have millions of cells with an additional layer of information about the material properties of the organs. So, you can see why it might take a couple weeks for the
simulation to run. But what field processing allows them to do is find the critical points where there is uncertainty in the model that can indicate change. Yue describes why this is important for treatment planning.
Y. ZHANG: By looking at the simulation results the doctor can see a little bit more about what could be the changes to the prostate through the treatment period. Currently the doctors from the
clinical side they scan the patient on the first day, so they have an initial scan which is a MRI — very detailed. Then at that point they develop a treatment plan which includes the direction and
the dosage of the radiation. But that’s a static method. It’s not dynamic it’s not adaptive, so using our simulation we hope the doctors can have some predictive knowledge of where the organs could
be and what shape the organs could have during the treatment. In addition, we would like to track how the material is changing through the radiation period.
ROBERTSON: Specifically, they would like to see if the tumor is changing… hopefully shrinking. And if other organs are being affected by the treatment.
Y. ZHANG: What we are hoping to achieve is we will get adaptive treatment plan and individualized for each patient. No two patients are the same. Cancer development varies from patient to patient,
their ages, their health conditions, their family histories, all different. What we are trying to do here that is novel is we want to include bio mechanical modeling the simulations we want to
include the tensor visualization on the material stress tensors.
E. ZHANG: We believe that tensor field visualization and analysis is key to the medical applications that we are talking about here as well as many other application going back to earthquake, tsunami
analysis. This is a new direction for the graphics community but I would want to go bigger than that say it’s actually something that faces the whole scientific community — is to look at ways of
modeling everything including the shape and the materials properties on the object. However, it is very challenging as Yue has mentioned, the fact that we are not doing biopsies on people then we
only have the scans so we are sort of limited to extracting information about material properties through geometry information, like pixels the color of pixels and there is a lot of guessing work, so
I’m hoping that there will be other ways, but non-invasive still to help us in order to model the material properties like the tumors.
[MUSIC: Oceanside Drive, Ethereal Delusions, used with permission of the artist.]
ROBERTSON: 3D modeling is inherently cool, and know you may know a little bit more about how complex it is. In conclusion I asked Eugene what it is about this project that sparked his interest.
E. ZHANG: It’s interesting to reflect why I’m very interested in the problem. I guess growing up I had always been sort of interested in abstract things like mathematics geometry. I have also been
very interested in science, but I have always considered them separate and unconnected. And this project is one of the projects that I finally feel like the two sides of my interests have started to
converge. Where we are doing mathematically motivated research but with real impact where we really want to help patients to survive, to overcome cancer. I’m really hopeful that the techniques we are
developing or the tools we are building will be useful not only to architects or artists at Disney or Pixar but also to scientists and doctors that could actually save lives and overcome all
these diseases including cancer, various forms of cancer, and AIDS.
ROBERTSON: Okay, my friends, that concludes our first episode on human and environmental health stay tuned for more episodes on how researchers in engineering are working to improve our lives. And
remember we want your feedback on the new format for Engineering Out Loud. You can email us at engineeringoutloud@oregonstate.edu or send a message to our Facebook account.
Source: Oregon State University under the Creative Commons License.
1. Rachel Robertson says
Thanks for recognizing our podcast! It’s actually from Oregon State University though, and not the University of Oregon.
□ Rich Brueckner says
Got it fixed!
☆ Rachel Robertson says
Thanks much!
|
{"url":"https://insidehpc.com/2017/08/podcast-targeting-cancer-3d-modeling-simulation/","timestamp":"2024-11-12T02:42:40Z","content_type":"application/xhtml+xml","content_length":"126762","record_id":"<urn:uuid:b638e398-22f9-49e5-96d1-232e9c41e7dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00373.warc.gz"}
|
Measurement function for constant turn-rate and velocity-magnitude motion model
Since R2024b
measurement = ctrvmeas(state) returns the expected measurement for a state based on the constant turn-rate and velocity-magnitude motion model. The state argument specifies the current state.
ctrvmeas represents velocity in the xy-plane with velocity magnitude and direction. For the measurement function for constant turn-rate and velocity-magnitude motion model using Cartesian components,
Vx and Vy, see ctmeas.
Measure State Using Constant Turn-Rate and Velocity-Magnitude Motion in Rectangular Frame
Define the state of an object in 2-D motion with constant turn rate and constant velocity magnitude. The state includes the position in each dimension, the velocity magnitude, the course direction,
and the turn rate. Measurements are in rectangular coordinates, and the z-component of the measurement is zero.
state = [1;10;2;20;5];
measurement = ctrvmeas(state)
Measure State Using Constant Turn-Rate and Velocity-Magnitude Motion in Spherical Frame
Define the state of an object in 3-D motion with constant turn rate and constant velocity magnitude. The state includes the position in each dimension, the velocity magnitude, the course direction,
and the turn rate. Measurements are in spherical coordinates. The elevation of the measurement is zero and the range rate is positive indicating that the object is moving away from the sensor.
state = [1;10;2;20;5;1.5;0];
measurement = ctrvmeas(state,'spherical')
measurement = 4×1
Measure State Using Constant Turn-Rate and Velocity-Magnitude Motion in Translated Spherical Frame
Define the state of an object in 2-D motion with constant turn rate and constant velocity magnitude. The state includes the position in each dimension, the velocity magnitude, the course direction,
and the turn rate. The measurements are in spherical coordinates with respect to a frame located at [20;40;0]. The elevation of the measurement is zero and the range rate is negative indicating that
the object is moving toward the sensor.
state = [1;10;2;20;5];
measurement = ctrvmeas(state,'spherical',[20;40;0])
measurement = 4×1
Measure State Using Constant Turn-Rate and Velocity-Magnitude Motion with Measurement Parameters
Define the state of an object in 2-D motion with constant turn rate and constant velocity magnitude. The state includes the position in each dimension, the velocity magnitude, the course direction,
and the turn rate. The measurements are in spherical coordinates with respect to a sensor located at [-1;-2;0], moving at 2 m/s along the x-axis, and rotated by 90 degrees about the z-axis relative
to the global frame.The elevation of the measurement is zero and the range rate is positive indicating that the object is moving away from the sensor.
state2d = [1;10;2;20;5];
frame = 'spherical';
sensorpos = [-1;-2;0];
sensorvel = [2;0;0];
laxes = [0 -1 0; 1 0 0; 0 0 1];
measurement = ctrvmeas(state2d,frame,sensorpos,sensorvel,laxes)
measurement = 4×1
Put the measurement parameters in a structure and use the syntax with the measurementParameters argument.
measparm = struct('Frame',frame,'OriginPosition',sensorpos, ...
    'OriginVelocity',sensorvel,'Orientation',laxes);
measurement = ctrvmeas(state2d,measparm)
measurement = 4×1
Display Residual Wrapping Bounds
Specify a 2-D state and specify a measurement structure such that the function outputs azimuth, range, and range-rate measurements.
state = [1;10;2;20;5];
mp = struct("Frame","Spherical", ...
"HasAzimuth",true, ...
"HasElevation",false, ...
"HasRange",true, ...
Output the measurement and wrapping bounds using the ctrvmeas function.
[measure,bounds] = ctrvmeas(state,mp)
measure = 2×1
bounds = 2×2
-180 180
-Inf Inf
Input Arguments
state — Current state
real-valued five-element row or column vector | real-valued seven-element row or column vector | 5-by-N real-valued matrix | 7-by-N real-valued matrix
Current state for constant turn-rate motion, specified as a real-valued vector or matrix.
• When you specify the current state as a five-element vector, the state vector describes 2-D motion in the xy-plane. You can specify the state vector as a row or column vector. The components of
the state vector are [x;y;s;theta;omega], where:
□ x and y represent the x-coordinate and y-coordinate in meters.
□ s represents the velocity magnitude in meters/second.
□ theta represents the course direction in the xy-plane, counter-clockwise with respect to the x-axis, in degrees.
□ omega represents the turn-rate in degrees/second.
• When you specify the current state as a seven-element vector, the state vector describes 3-D motion. You can specify the state vector as a row or column vector. The components of the state vector
are [x;y;s;theta;omega;z;vz], where:
□ x and y represent the x-coordinate and y-coordinate in meters.
□ s represents the velocity magnitude in meters/second.
□ theta represents the course direction in the xy-plane, counter-clockwise with respect to the x-axis, in degrees.
□ omega represents the turn-rate in degrees/second.
□ z represent the position in the vertical plane in meters.
□ vz represents velocity component in the vertical plane in meters/second.
• When you specify the current state as a 5-by-N or 7-by-N real-valued matrix, each column represents a different state vector, and N represents the number of states.
Example: [0;300;15;40;0.5]
Data Types: single | double
measurementParameters — Measurement parameters
structure | array of structures
Measurement parameters, specified as a structure or an array of structures. The structure can contain these fields:
• Frame: Frame used to report measurements, specified as 'Rectangular' (detections are reported in rectangular coordinates) or 'Spherical' (detections are reported in spherical coordinates). In Simulink, when you create an object detection Bus, specify Frame as an enumeration object of fusionCoordinateFrameType.Rectangular or fusionCoordinateFrameType.Spherical because Simulink does not support variables such as a character vector that can vary in size. Example: 'spherical'
• OriginPosition: Position offset of the origin of the frame relative to the parent frame, specified as an [x y z] real-valued vector. Example: [0 0 0]
• OriginVelocity: Velocity offset of the origin of the frame relative to the parent frame, specified as a [vx vy vz] real-valued vector. Example: [0 0 0]
• Orientation: Frame rotation matrix, specified as a 3-by-3 real-valued orthonormal matrix. Example: [1 0 0; 0 1 0; 0 0 1]
• HasAzimuth: Logical scalar indicating if azimuth is included in the measurement. This field is not relevant when Frame is 'Rectangular'. Example: 1
• HasElevation: Logical scalar indicating if elevation information is included in the measurement. For measurements reported in a rectangular frame, if HasElevation is false, the reported measurements assume 0 degrees of elevation. Example: 1
• HasRange: Logical scalar indicating if range is included in the measurement. This field is not relevant when Frame is 'Rectangular'. Example: 1
• HasVelocity: Logical scalar indicating if the reported detections include velocity measurements. For a measurement reported in the rectangular frame, if HasVelocity is false, the measurements are reported as [x y z]; if HasVelocity is true, the measurement is reported as [x y z vx vy vz]. For a measurement reported in the spherical frame, if HasVelocity is true, the measurement contains range-rate information. Example: 1
• IsParentToChild: Logical scalar indicating if Orientation performs a frame rotation from the parent coordinate frame to the child coordinate frame. When IsParentToChild is false, Orientation performs a frame rotation from the child coordinate frame to the parent coordinate frame. Example: 0
If you want to perform only one coordinate transformation, such as a transformation from the body frame to the sensor frame, you must specify a measurement parameter structure. If you want to perform
multiple coordinate transformations, you must specify an array of measurement parameter structures. To learn how to perform multiple transformations, see the Convert Detections to objectDetection
Format example.
Data Types: struct
Output Arguments
bounds — Measurement residual wrapping bounds
real-valued two-element row vector | M-by-2 real-valued matrix
Measurement residual wrapping bounds, returned as a two-element real-valued row vector or an M-by-2 real-valued matrix, where M is the size of each measurement. Each row of the matrix corresponds to
the lower and upper bounds, respectively, of each measurement in the measurement output.
The function returns different bound values based on the frame input.
• If you specify frame as 'Rectangular', each row of the matrix is [-Inf Inf], indicating that the filter did not wrap the measurement residual.
• If you specify frame as 'Spherical', the function returns bounds for each measurement based on the following:
□ When HasAzimuth = true, the matrix includes a row of [-180 180], indicating that the filter wrapped the azimuth residual in the range of [-180 180] in degrees.
□ When HasElevation = true, the matrix includes a row of [-90 90], indicating that the filter wrapped the elevation residual in the range of [-90 90] in degrees.
□ When HasRange = true, the matrix includes a row of [-Inf Inf], indicating that the filter did not wrap the range residual.
□ When HasVelocity = true, the matrix includes a row of [-Inf Inf], indicating that the filter did not wrap the range rate residual.
If you set any of the fields to false, the returned bounds do not contain the corresponding row. For example, if HasAzimuth = true, HasElevation = false, HasRange = true, HasVelocity = true, then the
function returns the bounds as:
-180 180
-Inf Inf
-Inf Inf
The filter wraps the measurement residuals based on this equation:
x[wrap] = mod(x - a, b - a) + a
where x is the residual to wrap, a is the lower bound, b is the upper bound, mod is the remainder after division, and x[wrap] is the wrapped residual.
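As a quick worked example (our own, not from the documentation): with azimuth bounds of [-180 180], so a = -180 and b = 180, a raw azimuth residual of x = 200 degrees wraps to x[wrap] = mod(200 - (-180), 360) + (-180) = mod(380, 360) - 180 = 20 - 180 = -160 degrees, which is the same angle expressed within the bounds.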
Data Types: single | double
More About
Azimuth and Elevation Angle Definitions
The azimuth angle of a vector is the angle between the x-axis and its orthogonal projection onto the xy-plane. The angle is positive when going from the x-axis toward the y-axis. Azimuth angles lie
between –180 and 180 degrees. The elevation angle is the angle between the vector and its orthogonal projection onto the xy-plane. The angle is positive when going toward the positive z-axis from the xy-plane.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Version History
Introduced in R2024b
|
{"url":"https://fr.mathworks.com/help/fusion/ref/ctrvmeas.html","timestamp":"2024-11-03T16:34:34Z","content_type":"text/html","content_length":"142479","record_id":"<urn:uuid:9c533b76-79f9-4b72-96df-1e3e1ee88817>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00121.warc.gz"}
|
7 Inch Pizza - Battista's Pizzeria
7 Inch Pizza
If you're looking for a pizza place that uses just about every word on your list of foods to eat, then you'll want to check out 7 Inch Pizza. They're a small-ish place, but they make sure to include every word on their food list. For example, there are no-fail pizza recipes like "Pizza nuastra," "Pizza with Basil," and so on. But they're also not afraid to experiment with different toppings and flavors, so you know you're getting a good food experience without breaking the bank.
How Big Is a 7 Inch Pizza?
A 7 inch pizza is usually considered a specialty pizza, has 4 slices, and serves 1 adult.
Is 7 Inch Pizza Enough for 1?
A 7 inch pizza feeds 1 adult, depending on how hungry you are. A 7 inch pizza has 4 slices, but each slice is quite small, so you will want to eat it all yourself. If you're a man and are very hungry, you may want to size up. For everyone else, a 7 inch pizza is enough for 1.
How Many Slices In a 7 Inch Pizza?
A 7 inch pizza will come in four slices.
How Many Calories In a 7 Inch Pizza?
The calories in a slice of pizza are dramatically different depending on the toppings and the exact size of each slice. The total number of calories in a 7 inch pizza is roughly 539. This comes out to about 135 calories per slice.
What Size Is a 7 Inch Pizza?
A 7 inch pizza is usually considered a specialty pizza.
Use the chart below to compare it to other common sizes.
How Big Is a 7 Inch Pizza (Total Area)?
The total size of a 7 inch pizza is about 10 square inches per slice.You can calculate the total area of any pizza by multiplying the radius of any pizza by.If you don’t feel like doing math, just
use my chart to compare a 7 inch pizza to the other most popular pizza sizes.
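As a quick illustration of that formula (a small sketch added for clarity, not part of the original chart), here is the same calculation for the 7 inch and 10 inch sizes:

import math

def pizza_area(diameter_inches):
    # Area = pi * r^2, where r is half the diameter.
    return math.pi * (diameter_inches / 2) ** 2

seven_inch = pizza_area(7)    # about 38.5 square inches
ten_inch = pizza_area(10)     # about 78.5 square inches
print(round(seven_inch, 1), round(ten_inch, 1), round(ten_inch / seven_inch, 2))
# The 10 inch pie has just over twice the area of the 7 inch pie.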
7 Inch Pizza Size Comparison
If you're trying to decide whether or not a 7 inch pizza is big enough, here's a chart you can use. The difference in diameter is only 3 inches, yet a 10 inch pizza is about 108% bigger than a 7 inch pizza: even though the diameter of each size is only slightly different, the total amount of pizza you're getting from each additional size is very different.
7 Inch Pizza Size Comparisons
Now that we know how big a 7 inch pizza is, we can take a closer look at how it compares to the other popular pizza sizes. A 10 inch pizza is 3 inches bigger in diameter than a 7 inch pizza; it has the same number of slices, but each slice is about 108% bigger. A 12 inch pizza has roughly three times the area of a 7 inch pizza, has 2 more slices, and feeds 1-2 more adults. A 14 inch pizza is a large pizza and serves 3-4 adults, whereas a 7 inch pizza feeds just 1; a 14 inch pizza is about four times the size of a 7 inch pizza. A 16 inch (extra large) pizza serves 3-5 adults, is 9 inches bigger in diameter than a 7 inch pizza, and has 2.5x as many slices. An 18 inch (jumbo) pizza has more than six times the area of a 7 inch pizza; it serves 4-6 adults and has 12 slices, making it the much better option for a large group of people.
Why Pizza Size is Important
Pizza size is important for two reasons: serving size and price.
Serving size: You always want to make sure you have enough pizza. There is nothing worse than running out of pizza, especially when you were the one in charge of ordering it. Use the chart above to determine how much pizza, and in what sizes, you should order for your group. For example, if I were serving 8 adults, I would order 2 extra large, 16 inch pizzas.
Price: You can get more pizza for a better price by ordering a larger size. Two medium pizzas will be much more expensive than one large, and you will only be getting a little more pizza for the money. If you're trying to decide between 2 sizes, I always recommend ordering the bigger size. Who doesn't love leftover pizza? And remember, a pizza that is just 2 inches bigger in diameter will have much more pizza.
If you're looking for tips on how to choose the right size for your next pizza night, we'll start with some recommendations.
7 Inch Pizza Size Tips: How To Choose The Perfect Pizza Size For Your Occasion
Now that we have answered how big a 7 inch pizza is, here's how to pick the perfect pizza size for your next gathering; keep these things in mind. If you're ordering pizza for just yourself, a 7 inch pizza is probably perfect. If you're very hungry, want leftovers, or another person will also be eating, I would recommend ordering a 10 or 12 inch pizza. When choosing a pizza size, also consider your budget: a 7 inch pizza costs less up front, but larger pizzas are usually the better value per slice. If you're really eating pizza alone, a 7 inch pizza will be perfect. If there's anyone else coming, it's better to order a bigger size and share.
Final Word: 7 Inch Pizza
If I am ordering pizza alone and the restaurant I am buying from offers it, a 7 inch pizza is my go-to order. It is the perfect amount for 1 person: you can eat the entire pie in one sitting and still take in only about 539 calories. A 7 inch pizza has 4 slices, serves 1 adult, is about 38 square inches, and contains about 539 calories. In most cases that is enough for 1 person; if you're extremely hungry, a 7 inch pizza might not be enough.
Pizza Size Chart
Before reading the chart, keep one thing in mind: even though the diameter of each size is only slightly different, the total amount of pizza you're getting from each additional size is very different. Without further ado, here's my pizza size chart.
To calculate the size of any pizza, use this formula:
Area = pi x r^2
For a 10 inch pizza, we take the radius of 5 (10 / 2), square it to make 25 (5 x 5), and multiply it by 3.14159 to give us a total area of about 79 square inches. The rest of the article will break down pizza sizes by area, number of slices, and number of servings.
How Big Is a Personal Pizza? (How Big Is a Specialty Pizza)?
A personal pizza will have a diameter of 7, 8, or 9 inches depending on where you order it from. You can expect it to be cut into 4 slices and will have 38.5 square inches, 50 square inches, or 64
square inches depending on if you order a 7-, 8-, or 9-inch pizza.
How Big Is a Small Pizza?
A small pizza has a diameter of 10 inches and will contain 79 square inches of pizza.
How Big Is a Medium Pizza?
A medium pizza has a diameter of 12 inches and has a total of 113 square inches.
How Big Is a Large Pizza?
A large pizza has a diameter of 14 inches and will contain about 154 square inches of pizza.
How Big Is an Extra Large Pizza?
An extra-large pizza has a diameter of 16 inches and has 201 square inches of pizza.
How Big Is a Jumbo Pizza? (How Big Is a Party Pizza?)
A jumbo pizza will not be available from most restaurants. It will have a diameter of 18 inches and will be contains 254 square inches of pizza.
Number of Slices in a Pizza (Pizza Size Comparison)
Different pizza sizes have different numbers of slices and different areas per slice. I will compare pizza sizes by both of these factors.
Number of Slices in a Personal Pizza
A personal pizza has four slices, regardless of whether it is a seven-, eight-, or nine-inch pizza. A personal pizza will have about 10, 13, or 16 square inches of pizza per slice.
Number of Slices in a Small Pizza
A small pizza typically has 4 slices, but will sometimes come in 6 slices. A small pizza will have about 20 square inches of pizza per slice.
Number of Slices in a Medium Pizza
A medium pizza has 6 slices. A medium pizza will have about 19 square inches of pizza per slice.
Number of Slices in a Large Pizza
A large pizza has 8 slices. A large pizza will have about 19 square inches of pizza per slice.
Number of Slices in an Extra Large Pizza
An extra large pizza has 10 slices. An extra large pizza will have about 20 square inches of pizza per slice.
Number of Slices in a Jumbo Pizza
A jumbo pizza has 12 slices. A jumbo pizza will have about 21 square inches of pizza per slice.
Servings per Pizza (Pizza Size Comparison)
The number of servings in a pizza can vary a lot based on the number of toppings, the taste, and how hungry the eaters are. For the sake of this article, 1 serving will equate to the amount an average adult can eat comfortably in one sitting. If you have children, you can consider 3 of them being equal to 2 adults.
How Many People Can a Personal Pizza Serve?
A personal pizza can serve 1 adult or 2 small children.
How Many People Can a Small Pizza Serve?
A small pizza can serve 1-2 adults or 2-3 children.
How Many People Can a Medium Pizza Serve?
A medium pizza can serve 3-4 adults or 4-5 children.
How Many People Can a Large Pizza Serve?
A large pizza can serve 3-4 adults or 5-6 children.
How Many People Can an Extra Large Pizza Serve?
An extra large pizza can serve 3-5 adults or 5-8 children.
How Many People Can a Jumbo Pizza Serve?
A jumbo pizza can serve 4-6 adults or 6-9 children.
Other Factors to Consider When Ordering Pizza
Before you place your order, there are a few things you should keep in mind: how hungry your group is, whether you want leftovers, your budget, and how many adults and kids you are feeding.
That's a wrap on my pizza sizes and pizza size comparisons! Now, order some pizza!
Pizza Ordering Guide (Adults)
When ordering pizza, you will get the best value by ordering the largest sizes. It's often cheaper to order 1 extra large than it is to order 2 mediums, even though the total amount of pizza is roughly the same. I'm also going to assume your pizza joint offers extra large (16-inch) pizzas as its largest size and that there will be no appetizers.
How many pizzas for 5 people?
You should order one extra large pizza for five people.
How many pizzas for 8 people?
You should order two extra large pizzas for 8 people.
How many pizzas for 15 people?
You should order 3-4 extra large pizzas for 15 people.
How many pizzas for 20 people?
You should order 4-5 extra large pizzas for 20 people.
How many pizzas for 30 people?
You should order 6-8 extra large pizzas for 30 people.
How many pizzas for 40 people?
You should order 8-10 extra large pizzas for 40 people.
How many pizzas for 5 kids?
You should order 1 large pizza for 5 kids.
How many pizzas for 8 kids?
You should order 2 large pizzas for 8 kids.
How many pizzas for 15 kids?
You should order 2-3 extra large pizzas for 15 kids.
How many pizzas for 20 kids?
You should order three large pizzas for 20 children.
How many pizzas for 30 kids?
You should order 4-6 extra large pizzas for 30 kids.
How many pizzas for 40 kids?
You should order six large pizzas for 40 children.
Welcome to my blog! I'm Kenelm Frost, a passionate cook who loves making pizza and pasta. Through this blog, I share tips, tricks, and recipes to help fellow foodies create amazing Italian dishes at home.
|
{"url":"https://battistaspizzeria.com/pizza/7-inch-pizza/","timestamp":"2024-11-02T15:37:04Z","content_type":"text/html","content_length":"105655","record_id":"<urn:uuid:a162ddf8-3d49-46e1-863e-d1fa22441b65>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00792.warc.gz"}
|
Arrow Speed Calculator
This calculator is based on average efficiency gains as arrow weight is increased from 5 grains per pound (IBO). The most critical factor is the IBO speed entered as this is the starting point of
the calculations.
Most IBO ratings are not real world numbers. In the majority of cases, once a peep is added the speed will only be a few fps off, but the ratings of some bows are significantly off to begin with. I set this calculator up assuming a standard peep. If your peep is significantly heavier than normal, or you have a bunch of other stuff on your string, then that will cause your base rating to be off.
Also, every bow gains efficiency a little differently. The worse the bow's efficiency at IBO, the more growth it will see as arrow weight increases. I created an average efficiency curve based on bows from APA, Elite, Bowtech, Hoyt, and PSE ranging from 40 lbs up to 85 lbs. This average will get most bows fairly close, but if your bow is on the far end of the efficiency spectrum then it will be off a bit.
The upside is that even if it's off due to this the numbers should be much closer than any of the other calculators out there. That was the main goal of this project.
For the most accurate results, it is best to use a known speed and an arrow weight under 6 grains per pound to fine-tune where your base IBO should be.
Example - A 70#/29" bow that's rated at 350 fps has been chronographed at 315 fps with a 400-grain arrow. At the rated 350 fps IBO, the calculator predicts 319 fps, so we know that the IBO is slightly high. Reduce the IBO speed until the calculator shows the verified speed of 315; in this case the correct IBO value would be 346 fps.
Calculator fields: IBO speed (feet per second); calculated arrow speed; kinetic energy.
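The kinetic-energy readout can be reproduced with the standard archery formula KE = arrow weight x speed^2 / 450,240 (weight in grains, speed in fps). A minimal sketch follows; the 450,240 divisor is the commonly used constant, and the calculator's speed-versus-weight efficiency curve itself is not reproduced here:

```python
def kinetic_energy(arrow_weight_grains, speed_fps):
    """Arrow kinetic energy in foot-pounds (grains and feet per second in,
    using the commonly cited 450,240 divisor)."""
    return arrow_weight_grains * speed_fps ** 2 / 450_240

print(round(kinetic_energy(400, 315), 1))  # the 400-grain example above -> roughly 88 ft-lbs
```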
Induction logging technique - Patent 0084001
This invention relates to electrical induction logging systems for determining the nature and characteristics of the various sub-surface formations penetrated by a borehole in the earth. More
particularly, this invention relates to a method and system for processing the induction log measurements whereby the non-linear variations in the sonde response function as a function of the
conductivity of the formations being investigated and the unwanted contributions to each log measurement from currents flowing in formations spaced apart from the measurement depth are attenuated.
It is important to the oil and gas industry to know the nature and characteristics of the various sub-surface formations penetrated by a borehole because the mere creation of a borehole (typically by
drilling) usually does not provide sufficient information concerning the existence, depth location, quantity, etc., of oil and gas trapped in the formations. Various electrical techniques have been
employed in the past to determine this information about the formations. One such technique commonly used is induction logging. Induction logging measures the resistivity (or its inverse, conductivity) of the formation by first inducing eddy currents to flow in the formations in response to an AC transmitter signal, and then measuring a phase component signal in a receiver signal
generated by the presence of the eddy currents. Variations in the magnitude of the eddy currents in response to variations in formation conductivity are reflected as variations in the receiver
signal. Thus, in general, the magnitude of a phase component of the receiver signal, that component in-phase with the transmitter signal, is indicative of the conductivity of the formation.
In theory, the electrical resistivity of the formation should be relatively high (or the conductivity relatively low) when the formation contains a high percentage of hydrocarbons because
hydrocarbons are relatively poor conductors of electricity. Where hydrocarbons are not present in the formations and the formations contain salt water, the electrical resistivity of the formation
should be relatively low. Formation water, typically salty, is a relatively good conductor of electricity. Induction resistivity logging tools thus obtain information about the formations which can
be interpreted to indicate the presence or absence of these hydrocarbons.
U.S. patents 2,582,314; 3,340,464; 3,147,429; 3,179,879 and 3,056,917 are illustrative of typical prior-art well logging tools which utilize the basic principles of induction logging. In each of the
tools disclosed in these patents, a signal generator operates to produce an AC transmitter signal which is applied to a transmitter coil. The current in the transmitter coil induces in the formations
a magnetic field, which, in turn, causes eddy currents to flow in the formations. Because of the presence of these formation currents, a magnetic field is coupled into a receiver coil R thereby
generating a receiver signal. (Logging tools having "a receiver coil" and "a transmitter coil" each comprising several coils arranged in a predetermined geometrical fashion to obtain a desired
response are commonly used. These coil systems are sometimes referred to as "focused" coil systems.) The receiver signal is then amplified and applied to one or more phase sensitive detectors (PSDs).
Each PSD detects a phase component signal having the same phase as a phase reference signal which is also applied to the detector. The phase reference signal has a predetermined phase relationship to
the current in the transmitter coil(s). The output of the PSD(s) may be further processed downhole, or may be sent uphole to surface equipment for processing or display to an operating engineer.
A quantitative determination of the conductivity of the formations is based in large part on the value obtained for the phase component signal that is in-phase with the transmitter current in the
transmitter coil. This component signal is commonly referred to as the real or in-phase (R) component. Measurement of a phase component signal which has a phase orthogonal to (or, in quadrature to)
the transmitter current is sometimes obtained. This component signal is commonly referred to as the quadrature-phase (X) component signal.
Measurement of both the R and X phase component signals of the receiver signal is known. U.S. patents 3,147,429 and 3,179,879 both disclose induction logging apparatus which detects real and phase
quadrature components (designated in those patents as V and V', respectively) of the receiver signal from the receiver coil. The tools disclosed in these patents show the output from a receiver amplifier being applied to ideally identical PSD circuits, one for detecting the R component signal and the other for detecting the X component signal. Appropriate phase-shifting components are provided for generating the phase-quadrature phase reference signals required by the PSDs in order to resolve the phase component signals.
In addition to the hardware limitations addressed by the system disclosed in the incorporated patent application, the methods for determining true formation resistivity at any particular depth from
induction log measured data have in the prior art been adversely affected in cases where the true conductivity of adjacent bedding formations varies over a wide dynamic range.
To interpret the tool voltage measurements representative of the true formation conductivity requires a sonde response function relating formation conductivity to output voltage measurements of the
tool. This sonde response function is typically known as the vertical sensitivity curve of the induction tool sonde. For homogeneous formations, the sonde response function for a typical induction
sonde can best be described as a response curve which has a main lobe of finite width from which the majority of the signal originates. Sidelobes with non-zero amplitude lie to each side of the main lobe and extend longitudinally up and down the borehole from the center of the sonde with decreasing amplitude.
A term widely used by those skilled in the art to describe this sonde response function is the "vertical geometrical factor" of an induction tool. The vertical geometrical factor (GF) is the sonde
response function measured in a homogeneous formation having zero conductivity (infinite resistivity). As is discussed below, the character of the sonde response function varies with the conductivity
of the formations being investigated. Thus, the GF is a special situation (zero conductivity) for the sonde response function. The condition of zero conductivity is not often encountered in induction
logging, although low conductivity formations are regularly encountered. The term, general geometrical factor (GGF), is often used to describe the sonde response function regardless of the
conductivity at which a given response curve is obtained.
Because of the non-zero sidelobes of the sonde response function, currents flowing in the formations above and below the sonde provide an unwanted contribution to the log measurements. For example,
where the main lobe of the sonde response function is investigating a thin bed of low conductivity, the conductivity measurement will be too large if the thin bed is located near adjacent beds of
higher conductivity. This unwanted contribution is referred to by those skilled in the art as the "shoulder effect," and generally is meant to describe the incorrect interpretation of the sonde
measurements resulting from the non-zero sidelobes in the sonde response function.
The character of these sidelobes of the sonde response function has in the past been controlled by the geometry of the sonde coupled with the physics of induction logging. Various attempts have been
made in the past to minimize these sidelobes, for example, using multiple transmitter and receiver coils arranged in predetermined relationships within the sonde itself. U.S. patents 2,582,314 and
3,067,383 illustrate induction logging sondes in which multiple coils arranged in arrays are used to "focus" the sonde response function response curve to narrow the width of the main lobe and
attenuate the sidelobes. U.S. patent 2,790,138 discloses an induction logging tool in which two separate induction coil arrangements are used, both arrangements having the same geometrical center
with an inner transmitter-receiver coil pair physically located between an outer transmitter-receiver coil pair. Assuming that both coil pairs have the same sidelobe responses at vertical displacements greater than some fixed distance from the center of the sonde, subtracting the signal of one coil pair from that of the other reduces the contribution of formations spaced apart from the center of the sonde, beginning at that fixed distance and outward.
These focused coil systems, and such techniques as disclosed in U.S. patent 2,790,138, have not been able to effectively reduce the sidelobes of the sonde response function to a level which will
permit the logging tool to measure the conductivity of the formations accurately over several decades of magnitude. Because of the complexity of these focused coil arrangements, and the problems of
mutual coupling and the difficulty in constructing the sonde, resort to more elaborate focused arrangements to further reduce the sidelobes has already reached a point of diminishing returns.
In addition to the shoulder effect phenomenon discussed above, there is yet another problem which limits the ability of the induction logging equipment to accurately obtain a measure of the true
conductivity of the formations over a wide dynamic range. This problem is characterized by the non-linear changes in the profile of the sonde response function as a function of formation
conductivity. As the conductivity of the formation being investigated increases, the amplitude and shape of both the sonde response function's main lobe and its sidelobes change, and these changes
are non-linear with increasing conductivity. This characteristic is referred to as "skin effect." The skin effect phenomenon has also been described as that error signal which degrades the in-phase
component measurement of the conductivity to produce an incorrect value. This skin effect phenomenon results primarily from the mutual interaction with one another of different portions of the
secondary current flow in the formation material. The magnitude of this skin effect phenomenon also increases as the system operating frequency increases.
Prior art has shown that, among other things, the magnitude of this skin effect phenomenon is a complex and complicated function of the coil system operating frequency, the effective length of the
coil system, and the conductivity value of the adjacent formation material. The last-mentioned factor renders this phenomenon particularly objectionable because it produces the above-mentioned
non-linear variation in the output signal of the sonde. The occurrence of these non-linear variations can be substantially eliminated for a large range of formation conductivity values by proper
choice of the coil system, operating frequency and the effective coil system length. These factors, however, place undue restraints on the construction and operation of the coil and associated
circuits. These restraints, in turn, limit other desirable features of the coil system apparatus. For example, it is frequently desired that the coil system be able to accurately determine the
conductivity value of the formation material in a region lying at a substantial lateral distance from the borehole. This requires a relatively large coil spacing or coil system length. A large
spacing, however, increases the percentage of non-linearity resulting from the occurrence of skin effect. As another example of undesirable restraint, the signal-to-noise ratio of the induction
logging apparatus can be improved by increasing the tool's operating frequency. This, however, also increases the skin effect non-linearity.
If the conductivity of the formations being investigated is near zero, the GF response curve yields values of conductivities that are free of the skin effect phenomenon. But at higher conductivities,
the skin effect, as reflected as a change in the sonde response function, causes the conductivity values obtained from the measurements of the tool to be in error. U.S. patent 3,147,429 characterizes
this skin effect error as a voltage which subtracts from the "geometrical factor" signal predicted by the linear theory on which the GF response curve is based and which is well-known in the art.
U.S. patent 3,147,429 also discusses the skin effect phenomena as it relates to the quadrature-phase component X of each conductivity measurement. Those skilled in the art have recognized that the
magnitude of the X component is a function of the conductivity value of the formation material being investigated.
The logging system of U.S. patent 3,147,429 assumes that, to a degree, the magnitude of the quadrature-phase component measurement X is equal to the magnitude of the skin effect error signal. Since
the skin effect error signal tends to decrease the measurement from that which would obtain if the GF were the proper response curve for the formations being investigated, the in-phase component
measurements can be corrected by adding an adjusted quadrature-phase component where the adjustment is made dependent on the magnitude of the X component. While this approach yields some correction
to the in-phase component measurement for the skin effect error, there is no attention given in the prior art to the origin within the formation from where the skin effect error signal originates.
Rather, the prior art corrects for skin effect based only on the magnitude of the component of the conductivity measurement itself. In other words, the spatial aspects of the skin effect error
signals are totally ignored by the prior art.
As shown in the case of the shoulder effect phenomenon previously discussed, a consideration of the spatial aspects of the system transfer function is important if a true and accurate measurement of
the formation conductivity over a wide dynamic range of conductivities is to be obtained. The skin effect error also has a spatial aspect, because the conductivity of the formations being
investigated may not be homogeneous throughout or that the formations adjacent the borehole may be invaded by the drilling mud. The shape and character of the spatial response function for the skin
effect error signal can be defined as the difference between the GF response curve and the sonde response curves as measured at different values of conductivity. For these curves, it can be seen that
the contributions of formations longitudinally displaced along the borehole from the point of the measurement contribute varying amounts to the skin effect error signal, even when a homogeneous
medium is assumed. A gross adjustment to the in-phase component measurement based on a pure magnitude reading for the quadrature-phase component is not adequate to compensate for the skin effect
phenomenon so as to permit accurate measurements of the true conductivity over a wide dynamic range in conductivity. Attention must be given to compensating the in-phase measurement based on the
contributions to the skin effect error coming from the various parts of the formations.
Thus, it would be advantageous to provide a method of processing the induction log measurements and a system therefor that reduces the unwanted contributions in the log measurements from currents
flowing in formations spaced apart from the measurement depth by minimizing the sidelobes in the resulting system response function used to translate the formation conductivity values into the
processed measurements. It would also be advantageous to provide a method of processing the induction log measurements to minimize the effects of the non-linear variations in the sonde response
function resulting from changes in the conductivity of the formations being investigated.
In accordance with the present invention, a system and method of processing induction log measurements to reduce the unwanted contributions in the measurements from formation currents flowing in
formations spaced apart from each measurement depth is disclosed. The log consists of measurements of the sub-surface formations taken by an induction logging system at various depths in a borehole
in the earth. Each log measurement may consist of an in-phase R and/or a quadrature-phase X component. The
logging system has a sonde response transfer function which transforms the formation conductivity function into the measured voltage function of the logging system. The sonde response transfer
function varies with the conductivity of the sub-surface formations being investigated, and includes a main lobe which covers a length of the borehole and non-zero sidelobes that extend outwardly from the main lobe.
The method comprises the steps of windowing the spatial domain sonde response function obtained at zero conductivity by truncating its Fourier transform at a spatial frequency less than the frequency
at which the transformed function first goes to zero. (For purposes of this disclosure, hereinafter, any reference to the frequency of a function refers to its spatial frequency as distinguished from
a frequency which is time dependent.) Next, a target transfer function containing the window frequencies is selected. A spatial domain filter response function is then determined from the truncated
transformed zero conductivity sonde response function and the target transfer function, which when convolved with the zero conductivity spatial domain sonde response function, results in a sonde
response function having reduced sidelobes. Finally, the method includes the step of convolving the spatial domain filter response function with the in-phase log measurements to obtain a processed
log in which the unwanted contributions in each measurement from spaced apart formations to each measurement are reduced.
The step of selecting a target transfer function involves selecting a transfer function, such as a Kaiser window function, containing the frequencies remaining in the truncated transformed zero
conductivity sonde response function to produce a deconvolution filter that imparts minimum overshoot and ripple in the processed log for step changes in conductivity. The method is further
characterized in that the spatial domain filter response function is determined from the ratio of the target transfer function to the truncated transformed zero conductivity sonde response function.
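For concreteness, here is a minimal numerical sketch of this filter design step (Python/NumPy; it is not part of the patent, and the cutoff frequency, the Kaiser shape parameter beta, and all names are illustrative assumptions):

```python
import numpy as np

def design_deconvolution_filter(g_gf, dz, cutoff, beta=6.0):
    """Sketch: window the transformed zero-conductivity sonde response, divide a
    Kaiser-window target by it, and inverse-transform to a spatial filter.

    g_gf   : samples of the zero-conductivity sonde response g_GF(z)
    dz     : spatial sample spacing along the borehole
    cutoff : truncation frequency, assumed to lie below the first blind frequency
    beta   : Kaiser window shape parameter (assumed value)
    """
    n = len(g_gf)
    G = np.fft.rfft(g_gf) * dz               # G_GF(w), transform of the sonde response
    w = np.fft.rfftfreq(n, d=dz)              # spatial frequencies
    keep = w <= cutoff                        # truncate below the first blind frequency
    k = int(keep.sum())
    T = np.zeros_like(G)                      # target transfer function over the kept band
    T[keep] = np.kaiser(2 * k - 1, beta)[k - 1:]   # half of a Kaiser window, peak at w = 0
    H = np.zeros_like(G)
    H[keep] = T[keep] / G[keep]               # H(w) = T(w) / G_GF(w) on the kept band
    h = np.fft.irfft(H, n) / dz               # spatial-domain deconvolution filter h(z)
    return np.fft.fftshift(h)                 # symmetric impulse response centered at z = 0
```

Convolving the resulting h(z) with the in-phase measurements then plays the role of the final step described above.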
In another aspect of the invention, a method of processing an induction log to reduce both the unwanted contributions in each measurement from formations spaced apart from each measurement depth and
the effects of variations in the sonde response function with formation conductivity is disclosed. In addition to variations of the sonde response function with conductivity of the sub-surface
formations being investigated, the sonde response function is also characterized by a main lobe and non-zero sidelobes.
The method includes the steps of determining for each of a plurality of formation conductivity ranges a deconvolution filter response function based on the sonde response function where each filter
function, when applied to the in-phase component measurements of formations having conductivities in the range of that filter, reduces the unwanted contributions in the measurement from formations
spaced apart from the depth at which the measurement was made. Finally, each in-phase component measurement is processed by convolving the in-phase component measurements with a deconvolution filter
response function selected from among the plurality of filter functions according to the magnitude of the quadrature-phase components of the measurements being convolved.
The method is further characterized in that the step of determining a deconvolution filter response function for a given formation conductivity range comprises the steps of windowing the spatial
domain sonde response function for the given conductivity range by truncating the Fourier transform of this transfer function at a frequency less than the frequency at which the transformed sonde
response function first goes to zero. A target transfer function containing the window frequencies is then selected, and from the target transfer function and the truncated transformed sonde response
function, a spatial domain filter response function is determined which when convolved with the given conductivity range sonde response function results in a sonde response function having minimum sidelobes.
In accordance with yet another aspect of the invention, a method of processing an induction log to minimize both the unwanted contributions in each measurement from formations spaced apart from each
measurement depth and the effects of variations in the sonde response function with formation conductivity is disclosed. The method includes the steps of determining at each of a plurality of
conductivity values a deconvolution filter response function based on the sonde response function at each value of conductivity. Next, a digital filter for implementing the deconvolution filter
function is determined in which the digital filter has a plurality of coefficients. Each deconvolution filter is implemented by determining the values of the coefficients of the digital filter. A curve is then fitted for each coefficient to the plurality of values for that coefficient to obtain coefficient functions which vary as a function of a conductivity variable. Finally, the method includes the step of processing the in-phase component measurements to reduce the effects of variations in the sonde response function with formation conductivity by convolving the
in-phase component measurements with a deconvolution filter implemented by said digital filter in which the coefficients of the convolved digital filter are determined by solving the coefficient
functions for each coefficient using the quadrature-phase component measurement as the value of the conductivity variable.
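A minimal sketch of the coefficient-adaptation just described (Python/NumPy, not from the patent; the polynomial degree and all names are assumptions):

```python
import numpy as np

def fit_coefficient_functions(filters, conductivities, degree=3):
    """filters: array of shape (n_levels, n_taps), one designed deconvolution
    filter per conductivity level; returns one fitted polynomial per tap,
    giving that coefficient as a function of conductivity."""
    return [np.polynomial.Polynomial.fit(conductivities, filters[:, k], degree)
            for k in range(filters.shape[1])]

def adaptive_coefficients(coeff_polys, sigma_x):
    """Evaluate each per-tap polynomial at the quadrature-phase reading sigma_x,
    giving the filter coefficients to convolve with the in-phase measurements."""
    return np.array([p(sigma_x) for p in coeff_polys])
```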
For a fuller understanding of the present invention, reference should be had to the following detailed description taken in connection with the accompanying drawings, in which:
Figure 1 is a diagrammatic representation of the induction geometry for a two-coil induction sonde showing the formation induced current at point P in the formation;
Figure 2 is a plot of the real component of the sonde response transfer function of a typical induction logging sonde for various values of formation conductivity;
Figure 3 is a plot of the curves illustrating skin effect in which measured conductivity is plotted against true conductivity for various values of formation conductivity;
Figure 4 is a plot of an assumed profile of formation resistivity (inverse of conductivity) and the resistivity obtained by typical prior-art techniques;
Figure 5 is a plot of the Fourier transform curves used in obtaining the deconvolution filter response curves of the present invention;
Figure 6 is a plot showing the assumed profile for resistivity and the calculated profile using the ideal low-pass filter response curve as shown in Figure 5;
Figure 7 is an overlay plot of the vertical geometrical factor g[GF](z) as shown in Figure 2, the deconvolution filter response function h[K](z), of the present invention and the equivalent system
response function f[GF](z) after deconvolution of g[GF](z) with h[K](z);
Figure 8 is a plot showing the assumed profile of the formations for resistivity along with the resistivity calculated using the prior-art computing techniques and the deconvolution filter techniques
of the present invention;
Figure 9 is a plot of the deconvolved real component of the system response function f[R](z) obtained at various values of formation conductivity;
Figures 10(a) and (b) are, respectively, plots of the skin effect response function g[SR](z) obtained at various values of formation conductivity and curves of the imaginary component of the sonde
response function g[X](z) obtained at various values of formation conductivity, both figures plotted to the same abscissa;
Figures 11(a) and (b) illustrate, respectively, curves for the matching filtered imaginary component of the sonde response function and the filtered skin effect filter function, both deconvolved
according to the present invention and both curves plotted to the same abscissa;
Figure 12 is a block diagram representation of the phasor processing implementation of the present invention which reduces shoulder effect and skin effect in the processed resistivity log;
Figure 13 is a plot of the resulting system response function resulting from the phasor processing as shown in Figure 12;
Figure 14 is a field log showing the improvement in the accuracy of calculated resistivity according to the present invention; and
Figure 15 is a curve representing the response of a standard induction sonde operating in a layered formation, with each response curve displayed as a "snapshot" of the response at a given location.
Referring to the figures and first to Figure 1, a diagrammatic representation of the induction geometry for a two-coil sonde (one transmitter coil and one receiver coil) is shown. For purposes of
this disclosure, the derivations presented are based on the theory of induction logging as developed for this simple two-coil arrangement. In practice, however, the coil arrangement for a typical
induction sonde is more complex. Even though more complex, the present invention is equally applicable to the log measurements obtained by these more complex coil sondes because the response of a
complex sonde is the linear combination of two-coil sondes.
Two difficult problems in induction logging are correcting measurements for shoulder effect and skin effect. Shoulder effect is the unwanted contribution to the measured conductivity of a thin bed
(of low conductivity) by currents flowing in adjacent more conductive beds. This unwanted contribution results in a measured conductivity that is too large. Figure 4 illustrates a test section of a
hypothetical log showing an assumed profile (see also Figures 6 and 8) for the true formation resistivity and the resistivity log obtained from the prior-art methods of processing the induction log
measurement (see U.S. patent 3,166,709 for a disclosure of one prior-art method of calculating the resistivity of the formations from the measurements). Where the true resistivity values are high (low conductivity) followed by a change to low resistivity (high conductivity), an error is seen between the true resistivity and the calculated values. This difference represents the "shoulder effect."
Skin effect is the non-linear response of the induction device to increasing formation conductivity causing the measured conductivity to be less than directly proportional to the true formation
conductivity. This non-linearity is due to the attenuation and phase shift of the electromagnetic waves in the conductive formations. The theory of induction logging and this skin effect phenomenon
have been discussed extensively in the prior art. The article by Henri-Georges Doll appearing in the June, 1949 issue of the Journal of Petroleum Technology entitled "Introduction to Induction
Logging and Application to Logging of Wells Drilled With Oil Base Mud," pages 148-162 ("Doll article"), and the article by J. H. Moran and K. S. Kunz appearing in Vol. 28, No. 6 of the December, 1962
issue of Geophysics and entitled "Basic Theory of Induction Logging and Application to Study of Two-Coil Sondes," pages 829-858 ("Moran article"), are representative treatments of the theory of
induction logging. Additionally, U.S. Patent 3,147,429 issued to J. H. Moran ("Moran patent") discusses in some detail the skin effect phenomenon.
Summarizing the material as presented in the above cited materials, the voltage measurements obtained by an induction logging tool are representative of the conductivity of the formation based on the
geometrical factor theory (Doll article). Referring still to Figure 1, the induced current J(ρ,z) at a point P in the formations is the result of a sinusoidal transmitter current of the form I e^{iωt}. This current induces an eddy current distribution in the surrounding formation which is related to the transmitter position and the formation conductivity distribution. The current flowing in the
formation is computed by solving Maxwell's Equations for the appropriate boundary conditions.
This solution is described in general terms by the retarded potential solution, which implies that the field giving rise to an eddy current at a given point in the formation is dependent on the
magnitude of the current flowing in other parts of the formation. Alternatively, the field can be thought of as propagating through a dispersive medium. All interactive or propagation effects are
described by the retarded potential solution, so, once the current distribution in the formation is computed, the voltage induced in the receiver coil by the formation currents can be computed by
applying the Biot-Savart law and integrating over the volume containing eddy currents. Thus the receiver voltage is obtained (display equation not reproduced here), where B[R], the magnetic field at the receiver coil R, is itself given by a further equation (also not reproduced).
This leads to a general solution for the receiver signal in terms of the formation conductivity distribution σ[F](ρ,z,φ) (using cylindrical coordinates ρ,z,φ to represent the formation coordinates); this general solution is Equation 3 (display not reproduced here). The function g(ρ,z,φ,σ[F]) represents both the geometrical parts of the coupling and the propagation parts, and σ(z) is the receiver signal in units of conductivity at the position ρ = 0, z. The function g(ρ,z,φ,σ[F]) maps the formation conductivity σ[F](z) into the measured signal σ(z).
In the homogeneous medium, g(ρ,z,φ,σ[F]) is given by an expression (not reproduced) in which L is the coil spacing, k^2 = iωµσ[F] is the propagation constant, r[T] is the vector distance from the transmitter coil to the formation element (ρ,z), and r[R] is the vector distance from the receiver coil to the formation element (ρ,z) (the φ dependence disappearing due to the cylindrical symmetry).
Equation 3 does not represent a proper convolution, for the function g(ρ,z,φ,σ[F]) is not linear; that is, it does not obey superposition in the conductivity.
However, the function g will be referred to as the induction sonde response function because it does describe the mapping of the formation conductivity distribution into the received signal at the
point ρ = 0, z. The function g is, obviously, different at each point along the borehole.
The following derivation considers only cylindrically symmetric formation geometries so the integration over φ disappears. Since the measured signal is constrained by the borehole to be a function of
z only, integration over ρ obtains the vertical spatial sonde response function g (display equation not reproduced here). The mapping function g(z,σ[F]) is, like the general function in Equation 3, a function of σ[F](z) and is non-linear with linear changes in σ[F](z). The following mapping operation (display equation not reproduced)
is also not a proper convolution so the concept of a linear deconvolution is not applicable.
However, in the limit of low conductivity, Equation 4 reduces to a form (not reproduced here) in which g[GF](z) is the geometrical factor of the induction sonde and is constant over all formation conductivities. The function g[GF](z) is not an accurate description of the induction response at higher conductivities, but the present invention uses g[GF](z) as the basis for the development of a deconvolution method.
The convolution of g[GF](z) with the formation conductivity produces a "measurement," σ[GF](z), given by a convolution integral (not reproduced here). This is what would be measured by an induction sonde if there were no skin effect, and represents a proper convolution. Equation 8, when integrated over ρ, gives the vertical geometrical factor of the Doll article (display equation not reproduced).
Since the measured signal σ(z) given by Equation (7) is a complex quantity, σ(z) = σ[R](z) + iσ[X](z), and the sonde response function is the weighting function which describes the contribution of the conductivity of each element of the formation to the measured conductivity referred to a given depth, g(z) must also be complex. Thus, g(z,σ[F]) = g[R](z,σ[F]) + i g[X](z,σ[F]), where g[R](z,σ[F]) is the real component of the sonde response function and is that response function which transforms the in-phase R component measurements, and g[X](z,σ[F]) is the imaginary component of the sonde response function which transforms the quadrature-phase X components. While g(z,σ[F]) (and its components, g[R](z,σ[F]) and g[X](z,σ[F])) is both a function of depth (z) and conductivity (σ[F]), hereinafter the discussion will be in reference to g(z) as a function only of depth, unless otherwise stated, while recognizing that g(z) changes as the conductivity (σ[F]) of the formation under investigation changes. Where σ[F] is approximately zero, g(z,σ[F]) is real and is defined to be g[GF](z), the geometrical factor of the induction sonde, and σ(z) = σ[R](z).
Shoulder Effect Correction
The origin of the coordinate system for the expression for the measurement σ(z) is usually chosen so that the measurement point is on the line z equal to zero (see Figure 1). As shown above, an expression for the sonde response function as a function of formation
conductivity may be found by solving Maxwell's equations in the formation geometry at hand. (For example, see the article "A New Look At Skin Effect" by S. Gianzero and B. Anderson given at the
S.P.W.L.A. 22nd Annual Logging Symposium June 23-26, 1981.) For homogeneous formations, the sonde response functions have been computed for a coil sonde used extensively in commercial resistivity
logging activities as described in U.S. Patent 3,067,383, and Figure 2 illustrates the various computed sonde response functions g[R](z) obtained for different values of formation conductivity for that particular coil system. See for example the geometrical factor described in U.S. patent 2,582,314. At zero conductivity, the
vertical geometrical factor GF curve is obtained. As can be seen from Figure 2, the variations in the sonde response function are significant when the formation conductivity increases. As previously
mentioned, the change in the sonde response function is non-linear with a linear increase in the formation conductivity.
The non-linearity of the sonde response function with increases in formation conductivity may be better understood by referring to Figure 3 which is a plot of the real (R) and the quadrature-phase
(X) components of each log measurement as functions of formation conductivity. As shown in Figure 3, when the true formation conductivity (σ[F]) is small, it is approximately equal to the in-phase component σ[R] of the log measurement. However, as σ[F] becomes larger, the σ[R] component measurements deviate from a true straight line (curve F). With increasing σ[F], the quadrature-phase component σ[X] also increases. Thus, for large values of σ[F], σ[R] deviates significantly from σ[F]. (See for example U.S. patent 3,226,633.)
Referring to Figure 2, the sonde response function g[R](z) can be described as having a main lobe spanning a length of the borehole and symmetrical non-zero sidelobes which extend outwardly from the main lobe with tails that decrease in amplitude with
increasing distance from the measure point. As the conductivity increases, these non-zero sidelobes increasingly become more negative with the main lobe decreasing in amplitude. These large negative
lobes cause "horns" to appear on the log as the sonde passes from a region of high conductivity to one of low conductivity, and vice versa.
The sonde response function for an induction logging tool would ideally be a delta function δ(z-z') that maps the conductivity of an infinitesimally thin sheet of formation into the measured value of σ(z) at each measurement depth. As Figure 2 shows, the sonde response function for any realizable sonde is far from ideal, and each measurement will include the contributions from a volume of
formation many feet thick.
Although the sonde response function does not describe an infinitesimally thin sheet of formation, but rather includes contributions from the conductivity of formations several feet in thickness,
there could possibly exist an operator h(z) which would map the sonde response function g(z) into the ideal delta function δ(z-z'). Thus, an expression for δ(z-z') could be written as the convolution δ(z-z') = h(z) * g(z) (Equation 12), where * denotes spatial convolution.
Equation 12 can be rewritten in the frequency domain by performing a Fourier transform of both sides of Equation 12, yielding Δ(ω) = H(ω)G(ω) (Equation 13).
Assuming that the conductivity is constant radially (non-invaded beds), in a homogeneous medium the apparent conductivity will be given by a convolution integral (Equation 14, not reproduced here), where z' is the axial distance from the center of the sonde and σ[F](z-z') is the true formation conductivity. Equation 14 is recognized as being of the form of the convolution integral of a linear time-invariant filter.
The Fourier transform of Equation 14 may be taken (Equation 15, not reproduced here), where the spatial frequency, ω, equals the reciprocal of distance. The article by C.F. George, et al., appearing in the February 1964 issue of Geophysics, entitled "Application of Inverse Filters to
Induction Log Analysis," shows applying Fourier transforms to equations which characterize induction logging, and obtaining inverse filters to improve data processing of induction logs.
From Equation 13, if the ideal system transfer function Δ(ω) is substituted for G[GF](ω) of Equation 15, the apparent conductivity should equal the transformed true conductivity. Thus, Equation 15 becomes Equation 16 (display not reproduced here).
Referring to Equation 16, if H(ω) is equal to the reciprocal of G[GF](ω), the measured conductivity will equal the formation conductivity. Figure 5 shows the Fourier transform of g[GF](z), G[GF](ω), for a typical induction logging sonde (see for example the sonde disclosed in U.S. patent 3,067,383). The problem with defining H(ω) as equal to the reciprocal of G[GF](ω) is that G[GF](ω) vanishes at certain values of ω, leaving H(ω) indeterminate. The specific values of ω for which G[GF](ω) = 0 are sometimes referred to in the art as "blind" frequencies.
An H(ω) can be mathematically described (Equation 17, not reproduced here) whose response is confined to frequencies ω less than the first blind frequency. A target transfer function T(ω) can then be defined (Equation 18, not reproduced here). In other words, T(ω) is the ideal low-pass filter curve shown in Figure 5.
For the target transfer function T(ω), as given in Equation 18, the calculated resistivity values for an assumed profile of resistivity of the formation as shown in Figure 6 will result. The target transfer function T(ω), being defined as an ideal low-pass filter, has the property of having the widest bandwidth for a particular cut-off frequency, but it suffers from the Gibbs phenomenon represented by overshoot
and ringing (ripple) in the calculated resistivity values, clearly illustrated in Figure 6. Where such ripple is present in the calculated resistivity values in the presence of abrupt changes in the
resistivity of the formations, the induction logging system would not be able to obtain accurate and precise readings of the formation conductivity or resistivity over a wide dynamic range, although
some measure of improvement in the shoulder effect is obtained.
To obtain accurate and precise readings of conductivity over a wide dynamic range, however, the ripple illustrated in Figure 6 must be eliminated. The present invention minimizes Gibbs phenomenon by
replacing the target transfer function defined by Equation 18 with a target transfer function T[K](ω) such that when t[K](z), the inverse Fourier transform of T[K](ω), is convolved with the formation conductivity profile σ[F](z), the resulting log will have minimum ripple in response to step changes in formation conductivity.
In the preferred embodiment of the present invention, the target transfer function T(ω) is a Kaiser window function. Kaiser window functions are known in the art of finite-duration impulse response
(FIR) digital filters. (The article appearing in Vol. 28, No. 1, of the IEEE Transactions on Acoustics, Speech and Signal Processing, February, 1980, and entitled "On the Use of the I[0]-Sinh Window for Spectrum Analysis," pages 105-107, discloses a Kaiser window function.)
Although a Kaiser window function is disclosed for the target transfer function to reduce the shoulder effect, the present invention is not limited to only this use. The target transfer function
could be any function which will perform a transformation to obtain any desired system response function based on the geometrical factor. For example, a target transfer function could be chosen to
produce a system response function which transforms medium depth conductivity measurements to appear as deep investigation log measurements.
Figure 5 illustrates both the Kaiser window function K(ω) and the filter transfer function H[K](ω) which results from the ratio of T[K](ω) to G[GF](ω) for frequencies ω less than or equal to the cutoff (Equation 19).
Using techniques well-known to those skilled in the art, such as the Remez exchange method, it is possible to determine a linear-phase finite digital impulse response filter which implements the
spatial domain filter function h(z) obtained from the inverse Fourier transformation of H(ω) of Equation 19. The Remez exchange method appears in the book entitled "Theory and Applications of Digital
Signal Processing" by Rabiner and Gold, pp. 187-204 (1975).
Figure 7 is an illustration of the sonde response function g[GF](z) for the typical focused coil system, the response functions of which are shown in Figure 2. The zero formation conductivity g[GF](z) curve is shown with the spatial filter function h[K](z) obtained from Equation 19 and the system transfer function f[GF](z) as given by f[GF](z) = h[K](z) * g[GF](z), where the * symbol represents the convolution operation. For the preferred embodiment of the invention, TABLE 1 (appearing at the end of the specification) illustrates the coefficients for a digital implementation of h[K](z) according to the Remez exchange method. The filter implementation is symmetrical about a center coefficient and contains a total of 199 terms.
It can be seen from Figure 7 that in the system response function f[GF](z), the main lobe has been increased in amplitude and sharpened (narrowed), and the sidelobes attenuated with the sidelobe tails rapidly dying out to zero. Thus, the contributions from currents flowing in the formations spaced apart from the measurement depth are substantially attenuated when the filter h[K](z) is used to transform the measurements σ[R](z) into the calculated conductivity values for the log. Figure 8 illustrates the dramatic improvement in the calculated resistivity values by the application of the deconvolution filter h[K](z) according to the present invention to the measured conductivity values σ[R](z), where the conductivity readings are small (little skin effect). Figure 9 illustrates the system response functions which result from the application of the deconvolution filter method to the
plurality of sonde response functions as illustrated in Figure 2.
Thus far, the method for obtaining a deconvolution filter based on the geometrical factor theory has been described; that is, a method for the case in which the skin effect phenomenon is negligible because σ[F](z) is small. The technique of determining a deconvolution filter can, however, be applied to the sonde response function g[R](z) (see Figure 2) obtained at any given conductivity value. Thus, each of the curves illustrated in Figure 2 could be processed to obtain a deconvolution filter for processing induction measurements σ[R](z). However, accurate values for the calculated conductivity are only possible when using these filters if the true formation conductivities are essentially equal to the same conductivity value which yielded the sonde response function g[R](z) used in obtaining the applied deconvolution filter h(z). If the conductivity values vary by any significant amount from that value, the deconvolution filter thus applied (and the deconvolution filter obtained for g[R](z) with σ[F](z) significantly greater than 0) will produce erroneous values for the computed conductivity.
As previously discussed, the sonde response function g(z) depends on the formation conductivity in a highly non-linear manner. This dependence is referred to as skin effect, and causes large errors
in the deconvolution at high conductivities, regardless of what method of reducing shoulder effect is used. (See U.S. Patent No. 3,166,709 for a prior-art method to reduce the shoulder effect
phenomenon. U.S. Patent 3,166,709 is incorporated herein for all purposes.) Thus, the ideal situation should not only have a system response function which represents an infinitesimally thin sheet of
formation, it should be constant regardless of the formation conductivity. In accordance with the present invention, one method of obtaining a constant system response function is to adapt the
deconvolution filter h(z) obtained according to the inverse Fourier transform of Equation 19 to the conductivity of the formations.
In adapting the deconvolution filter method, there are two basic approaches. The first approach is to determine various deconvolution filters for different conductivity ranges. Based on a control
signal, an appropriate deconvolution filter may be selected from among the plurality of deconvolution filters and applied to the log measurements. For example, deconvolution filters may be obtained
for the sonde response functions obtained for conductivities of 1, 500, 1,000, 3,000, 5,000 and 10,000 mmho/m. Figure 9 illustrates the system response functions which result from the deconvolution
filters obtained for the sonde response functions measured at 1, 500, 1,000, 3,000, 5,000 and 10,000 mmho/m. For the control signal, the quadrature-phase component of the conductivity measurements, σ[X](z), is used since the value of this component is dependent on the skin effect.
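A minimal sketch of this first, range-based approach (Python/NumPy, not from the patent; the placeholder filters and the nearest-level selection rule are assumptions):

```python
import numpy as np

# Placeholder deconvolution filters, one per design conductivity in mmho/m; in
# practice each would be designed from the sonde response measured at that level.
FILTER_BANK = {level: np.zeros(199) for level in (1, 500, 1000, 3000, 5000, 10000)}

def select_filter(sigma_x, bank=FILTER_BANK):
    """Pick the filter whose design conductivity is nearest the level indicated
    by the quadrature-phase (X) control signal sigma_x."""
    level = min(bank, key=lambda c: abs(c - sigma_x))
    return bank[level]
```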
A second approach for adapting the deconvolution filter method of reducing shoulder effect to also reduce skin effect is to continually adapt the coefficients of the digital deconvolution filter
implementation of each deconvolution filter h(z) based on a control signal, such as the quadrature-phase component σ[X](z). For this method, a digital filter implementation of the deconvolution filter response function h(z) is determined in which the digital filter has a fixed number of coefficient values. For a
plurality of formation conductivities, the deconvolution filters are implemented by this digital filter. From these implementations, each coefficient will have a plurality of values defining a
coefficient function, one value of the function taken from each filter implementation.
By fitting the best curve to the plurality of values for each coefficient obtained from the implementation of the plurality of deconvolution filter response functions, it is possible to obtain a plurality of coefficient functions in which the value of each coefficient is now a function of a conductivity variable. Since the quadrature-phase component of the induction log measurements is dependent on the magnitude of the skin effect, σ[X](z) can be used as the conductivity variable in the coefficient functions. It is then possible to continuously determine the coefficients of the best deconvolution filter to use in calculating the
resistivity log measurements.
Skin Effect Correction
For any formation, the "skin effect error signal," σ
(z), may be defined as the difference between the actual measured signal σ
(z) and the geometrical factor signal defined by Equation 9,
In simple formation geometries, σ
(z) can be easily computed. At a given measure point, Equation 21 can be rewritten as the mapping integrals
where the functions g
(z, σ
) and g(z,
) are the response functions which respectively map the formation conductivity σ
into the skin effect term σ
(z) and the measured conductivity σ
It can be shown from the definitions of the response functions of Equation 22 that the same relation as given for the measurements in Equation 21 holds for the response functions,
Because the measured signal σ
(z) is a complex quantity ( σ
(z) + i σ
(z)) and the geometrical factor signal is perforce real, the error signal must also be complex. Therefore, using Equation 21, the following obtains:
Furthermore, the imaginary part of the error signal contair all the imaginary information, so we may set σ
(z) = - σ
(z). (For this derivation, the direct mutual coupling, which appears as a part of the X-signal, has been ignored.) The real parts then are
which is identical to the skin effect signal development disclosed in the Moran article. A similar development to Equation 22 gives a relation between the corresponding response functions (display not reproduced).
The function g[R](z), computed using Equation 5 for a typical focused coil sonde commonly used in practice (see U.S. patent 3,067,383), is shown in Figure 2. The function g[GF](z) is also shown for comparison. Figure 10(a) shows the error response function g[SR](z) for several values of conductivity. The curves of Figure 10(a) are obtained from the curves of Figure 2 by taking the difference between g[R](z) and g[GF](z) at each selected value of σ[F], in accordance with Equation 23.
Although the mapping process involved in the induction measurement (Equation 7) is not a proper convolution, and no deconvolution exists per se, any filter may be applied to the sequence of measurements σ[R](n), where n represents the n-th sample of σ[R](z) in a sequence of sampled log measurements. If h(n) represents a digital FIR filter of length N and is designed as an inverse filter for g[GF](n), the application of h(n) to the sequence of measurements σ[R](n) is expressed by a convolution sum (not reproduced here) whose output is the filtered measurement at the j-th sample. Substituting for σ[R](n) from Equation 25 gives Equation 28 (not reproduced). This expression contains the term h * σ[GF] (where *, as mentioned above, denotes the discrete convolution operation), which is the deconvolved signal free of skin effect. Equation 28 shows that application of the deconvolution filter to the in-phase R component signal can be thought of as a proper deconvolution of σ[GF](n) plus the deconvolution error term on the right, the deconvolution of σ[SR](n).
Let g(z) represent any mapping function of the type g(z,σ[F]), such as g[R](z,σ[F]) or g[X](z,σ[F]). Application of h(n) to the signal resulting from mapping σ[F] with g(z) gives Equation 29 (not reproduced here), where z is the position of the n-th sample. Since the filter coefficients for h(n) are constants, the summation can be taken into the integral (as long as σ[F] remains fixed); Equation 29 then becomes a filtered mapping integral (not reproduced).
If a sequence of samples, {g(n)}, is taken of an arbitrary response function g(z), then the operation of applying h(n) to {g(n)} describes the filtered response, and can be determined. Figure 9 shows the result of such an operation on samples of the functions g[R](z,σ[F]) shown in Figure 2. Figure 11(b) is an illustration of the set of functions representing the deconvolution error response function f[SR](z), which is derived from applying h(z) to the functions g[SR](z) shown in Figure 10(a).
It has been disclosed by both the Moran patent and the Moran article that for a homogeneous medium the imaginary, or X-component signal, contains much of the information lost due to skin effect. The
measured signals can be written as expansions in the parameter L/δ, where δ is the skin depth (Equations 33 and 34, not reproduced here).
The corresponding sonde response functions for the expressions in Equations 33 and 34, g[R](z) and g[X](z), can be derived from Equation 4 for a homogeneous medium and are similar in form to Equations 33 and 34, with terms in L/δ. Figure 10(b) shows the sonde response function g[X](z) for the same sonde as shown in Figures 2 and 10(a). A comparison of Figures 10(a) and 10(b) shows a marked similarity between the curves.
Referring again to Figure 2, the most obvious difference between g[GF](z) and g[R](z) is the relative loss of information far from the sonde. This far-field loss reappears in both g[SR](z) and f[SR](z) (Figures 10(a) and 11(b), respectively). This information also appears in g[X](z) (Figure 10(b)), although reduced relative to the central value. In accordance with this similarity between g[X](z) and g[SR](z), the present invention fits g[X](z), computed at a given δ in a homogeneous formation, to the corresponding f[SR](z). One technique for obtaining this desired transformation is to use a finite impulse response (FIR) filter. However, because the difference between g[X](z) and f[SR](z) is also a function of conductivity σ[F] (this is illustrated for the homogeneous medium by Equations 33 and 34), the fitting of g[X](z) to f[SR](z) at different conductivity levels requires a different fitting filter.
Although some of the details of these two filters are different, the main averaging lobes of the two will be quite similar, since the major job of the filter transformation is to increase the contribution from parts of the formation at large distances from the sonde. The gains of the two filters are different, since the ratio of σ[SR](z) to σ[X](z) at, for example, 1,000 millimhos is different from the ratio at 10,000 millimhos. If the ratio of σ[SR](z) to σ[X](z) is designated as α(σ[X](z)), and the appropriate filter is normalized to unity gain, then an identity is obtained (display not reproduced) in which b(n) is the fitting filter with gain and b[a](n) is the same filter normalized to unity gain. This allows the desired transformation of g[X](z) into f[SR](z) to be done in two steps: a shape transformation followed by a magnitude transformation. The magnitude fitting simply makes sure that the areas under the filtered g[X](z) and f[SR](z) curves are equal.
The fact that the normalized filters at two rather widely separated conductivity levels are similar suggests that the main fitting transformation is the central lobe of the filter. For the present
invention, a normalized "average" of two filters, obtained at two widely separated conductivities, is determined. This average filter, b
(n), is applied to g
(z) at all conductivity levels. At each value of conductivity, the filtered g
(z) is compared to the corresponding f
(z) at that-same conductivity level. From these comparisons, the values of a boosting function α(σ
(z)) needed to adjust the magnitude of the transformed g
(z) to be equal to f
(z) are obtained. These values are then curve fitted to obtain the function which best fits the filtered g
(z) to f
(z). For the preferred embodiment of the invention α(σ
(z)) is fitted to a power series expansion of σ
(z) having seven coefficients as given in Table 3. Table 2 illustrates a 199 term digital filter implementation of b
(n) according to the Remez exchange method mentioned above for the sonde response functions illustrated in Figure 2.. The coefficients of this implementation are also symmetrical about a center
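The curve-fitting of the boosting values can be sketched as follows (Python/NumPy, not from the patent; only the seven-coefficient power series is taken from the text, while the data and names are assumptions):

```python
import numpy as np

def fit_boost_function(sigma_x_levels, boost_values):
    """Fit the boosting function alpha(sigma_X) to a seven-coefficient (degree-6)
    power series from measured (sigma_X, boost) pairs; returns the coefficients,
    lowest order first."""
    series = np.polynomial.Polynomial.fit(sigma_x_levels, boost_values, deg=6)
    return series.convert().coef   # coefficients in the ordinary power basis
```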
The expression contained in Equation 28 for the geometrical factor conductivity measurement at location j can be expressed as Equation 36 (not reproduced here), where σ[GF](j) is free of skin effect. Using Equation 25, Equation 36 may be rewritten (display not reproduced) so that the term on the right is the deconvolution error term.
From the transformation of g[X](z) into f[SR](z) and the boosting function α(σ[X](j)), an approximation to that error term is obtained (display not reproduced). The approximation of the skin effect error term by the transformed σ[X] measurement yields a corrected deconvolution measurement (Equation 39, not reproduced), where σ[P](j) is defined as a phasor deconvolution conductivity measurement.
Equation 39 may be rewritten in terms of deconvolution (display not reproduced), where f[P](z) is the system response function for the logging system. A plot of the term α(σ[X](z)) for the logging sonde having the sonde response curves of Figure 2 is shown in Figure 11(a).
Turning now to Figure 12, a block diagram illustration of an induction logging system which implements the phasor processing of the present invention, as given by Equation 39, is shown. An induction
logging tool 30 is shown suspended in a borehole 26 by a wireline cable 28. The induction tool 30 includes a sonde 36 (for purposes of illustration, a simple two-coil sonde is shown) having a sonde
response function g(z, σ) which maps the formation conductivity σ(z) into the log measurements. Tool 30 also includes a phase sensitive detector 32 which responds to signals from the transmitter oscillator 34 and the receive signal from receiver R to generate the in-phase and quadrature-phase components for each log measurement. One such tool which obtains very accurate measurements of quadrature-phase components is disclosed in the U.S. patent application serial No. 271,367, which is
incorporated herein. Although a logging tool which generates both an in-phase and a quadrature-phase component for each log measurement is shown in Figure 12, certain aspects of the present invention
are equally applicable to a tool which generates only an in-phase measurement. While Figure 12 shows a tool with a single phase sensitive detector for generating the phase quadrature components of
each conductivity measurement, a tool having two phase detectors could be used to generate the two phase components to be processed by the present invention.
Still referring to Figure 12, a processing unit 12 for processing the induction measurements obtained by tool 30 is shown. A demux 16 separates the two components of each log measurement received from tool 30. The in-phase component is applied to deconvolution filter means 18 and provisionally to summation means 24. The quadrature-phase component is applied to linear filter means 20. Deconvolution filter means 18 implements a filter response function h(z) based on the geometrical factor response function g(z). The derivation of the filter function h(z) is presented above, and for the preferred embodiment of the invention, h(z) represents a filter function which, when convolved with g(z) of a typical focused coil sonde, produces a system response function having a sharpened central lobe and decreased sidelobes.
The output of filter 18 is a deconvolved conductivity measurement σ(j) and represents a processed measurement in which shoulder effect has been reduced. The output from filter 18 is applied to summation means 24, and to recorder 14 for possible recording as a processed log. Provisionally applied to summation means 24 are the in-phase component measurements from demux 16. When used in conjunction with the phasor processing of the quadrature-phase component, an improved induction log may be obtained by summing the skin effect correction either with the deconvolved σ(j) or with the in-phase measurements directly, where the phasor processing of the present invention reduces skin effect in the processed measurements.
Processing unit 12 also processes the log measurements to reduce skin effect by filtering the quadrature-phase measurements in a non-linear filter means comprised of a linear filter means 20 and an amplifying means 22. Filter means 20 implements the linear filter response function b(z) as given above. The output from filter means 20 is boosted by amplifier means 22 according to a predetermined non-linear boosting function α(σ(z)), which varies as a function of the measured quadrature-phase component. The processed measurements on line 23 from amplifier 22 represent skin effect error correction factors which, when summed in summation means 24 with either the deconvolved conductivity measurements σ(j) or the measurements directly from tool 30, result in a phasor processed conductivity measurement σP(j) with reduced skin effect. The output from summation means 24 is applied, along with σ(j), to recorder 14 for recording as an induction log trace. For the preferred embodiment, processing unit 12 is a general purpose programmed computer.
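As a rough numerical picture of the Figure 12 signal flow, the sketch below deconvolves the in-phase log, filters and boosts the quadrature-phase log, and sums the two. The filter coefficients, the boosting polynomial, and the variable names are placeholders, not the values of Tables 2 and 3.

```python
# Minimal sketch of the Figure 12 signal flow; all inputs are placeholders.
import numpy as np

def phasor_process(sigma_r, sigma_x, h, b, boost_coeffs):
    """sigma_r: in-phase log, sigma_x: quadrature-phase log (sampled in depth);
    h: deconvolution filter; b: unity-gain fitting filter; boost_coeffs:
    power-series coefficients of the boosting function, lowest order first."""
    sigma_d = np.convolve(sigma_r, h, mode="same")    # filter 18: shoulder effect correction
    x_filt = np.convolve(sigma_x, b, mode="same")     # linear filter 20
    alpha = np.polyval(boost_coeffs[::-1], sigma_x)   # amplifier 22: boost alpha(sigma_x)
    return sigma_d + alpha * x_filt                   # summation means 24
```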
Referring now to Figure 13, a plot of the constant system response function fp(z) obtained according to the present invention for the sonde response curves shown in Figure 2 at several conductivities
is shown. The system response represented by fp(z) is virtually constant over the range from 1 mmho to 10,000 mmhos, and possesses the desired characteristics of a sharpened and increased central
main lobe with side lobes that have been reduced to near zero.
Turning now to Figure 14, a short section from an actual field induction log illustrating the improved accuracy of the phasor processing of the present invention is shown. The log shown in Figure 14 was obtained using the induction tool whose transfer functions are shown in Figure 2. Five traces are illustrated in Figure 14: the SP (spontaneous potential), the SFL electrode tool trace, the ILM (a medium depth induction tool), the IL (a normal depth induction tool), and the IL measured data processed according to the present invention (phasor processed trace). The benefits of the present invention are dramatically illustrated in the section identified as region A. In this region, the prior-art processing of the log measurements, identified by the curve IL, indicates a low resistivity value, while the phasor processed IL data essentially overlays the SFL curve. In the A region, the SP curve shows essentially no change. This absence of change in the SP curve indicates that there is very little invasion of the bed at region A. Thus, the indication of the SFL electrode tool, which does not suffer from shoulder effect, accurately represents the true resistivity (or its inverse, conductivity) of the bed.
The benefit of the deconvolution and skin effect phasor correction of the present invention is also illustrated in region A because of the large contrast between the high resistivity values occurring
in the region A and the high conductivity shoulder beds located to either side. Because the main lobe of the sonde response function spans not only region A but also the shoulder beds to either side
(this is even true in the case of the compensated system response function of the present invention), the high conductivities of the shoulder beds, which are experiencing significant skin effect, are
making a significant contribution to the conductivity reading obtained in the region A. But even in the presence of skin effect from the high conductivity shoulder beds, the phasor processing of the
present invention more accurately determines the conductivity of the low conductivity bed located at region A.
Referring now to Figure 15, a plot of fp(z) of a standard focused coil sonde, such as that represented by the sonde response functions of Figure 2, in a layered formation is shown. Each curve illustrated in Figure 15 is displayed as a "snapshot" of the response at the location where the curve is illustrated. The dotted curves represent the sonde responses as processed by a typical prior-art technique, i.e., a fixed deconvolution followed by boosting to the correct geometrical factor theory value. The solid curves represent the constant system response obtained according to the phasor deconvolution processing of the present invention. The improved results of the present invention are clearly evident in Figure 15 from the nearly constant system response function as the tool passes
through a thin layered bed, as compared to the varying responses of prior-art systems.
Summarizing the present invention, a method and system for processing an induction log to reduce the unwanted contributions from currents flowing in formations spaced apart from the measurement depth (shoulder effect) and the effects of the non-linear variations in the sonde response function with linear variations in formation conductivity (skin effect) are disclosed. To compensate for the
shoulder effect, the sonde response function obtained at zero conductivity (the geometrical factor) is Fourier transformed into the frequency domain. A window function is applied to the transformed
zero conductivity sonde response function to pass only those spatial frequencies of the transformed function up to a predetermined upper frequency. This upper spatial frequency is chosen to be less
than the first frequency at which the transformed zero conductivity sonde response function first goes to zero.
A target function is chosen so as to produce any desired response function of the logging tool, such as a system response function having reduced side lobes to reduce shoulder effect. The ratio of the target transfer function to the transformed and truncated sonde response function is formed, and its inverse Fourier transform is taken to obtain a spatial filter function that produces minimum ripple in the processed log. A Kaiser window function is disclosed as a preferred target transfer function to obtain the desired reduction in the shoulder effect. The reduction in the unwanted
contributions from the currents flowing in formations spaced apart from the measurement depth occurs when the spatial filter function obtained from the deconvolution method is convolved with the
in-phase component measurements of the log measurements.
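Read as a numerical recipe, that design might be sketched as follows; the sonde response samples, the zero-crossing detection, and the Kaiser parameter are illustrative assumptions rather than values from the disclosure.

```python
# Sketch of the frequency-domain deconvolution filter design described above.
# g: zero-conductivity sonde response (geometrical factor) sampled in depth;
# dz: sample spacing; beta: Kaiser window parameter (all assumed values).
import numpy as np

def design_deconvolution_filter(g, dz, beta=6.0):
    n = len(g)
    G = np.fft.rfft(g)                        # transform of the sonde response
    mag = np.abs(G)
    # Keep only spatial frequencies below the (approximate) first zero of |G|.
    near_zero = np.where(mag < 1e-3 * mag[0])[0]
    cutoff = near_zero[0] if len(near_zero) else len(G)
    # Target transfer function: the decaying half of a Kaiser window over the band.
    target = np.zeros_like(G)
    target[:cutoff] = np.kaiser(2 * cutoff, beta)[cutoff:]
    # Ratio of target to truncated sonde transform, back to a spatial filter h(z).
    ratio = np.zeros_like(G)
    ratio[:cutoff] = target[:cutoff] / G[:cutoff]
    return np.fft.irfft(ratio, n)
```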
To minimize skin effect, the present invention discloses a method and system of phasor processing of the induction log measurements in which the quadrature-phase components σX(z) are filtered by a filtering function that produces skin effect error correction components. These components are added to the deconvolved in-phase component measurements σ(z) (shoulder effect corrected) to produce a processed log that includes both shoulder effect and skin effect corrections. The skin effect correction may be made independently of the shoulder effect correction to obtain an improved log; skin effect error correction components may be added to the in-phase components uncorrected for shoulder effect. Additionally, the deconvolution method disclosed for the shoulder effect correction, when used in conjunction with the phasor processing, may be used to achieve any desired tool system response function when based on the geometrical factor sonde response function. Also, the prior-art techniques of shoulder effect correction and log processing, such as that disclosed in U.S. Patent 3,166,709, could be used to process the in-phase component measurements prior to summation with the skin effect correction components.
All About Treating Acne – Home Remedies, Management, & Homeopathic Remedies
Acne is one of the most common skin conditions worldwide, affecting around 85% of people at some time in their life. Most often, it affects teenage boys and girls.
Conventional and other treatments for acne can be quite expensive and may cause side effects such as dryness, redness, and irritation of the affected area. This may lead to a loss of confidence and embarrassment in social situations for many acne patients.
Acne – How Does it Happen?
Acne occurs when the skin pores get clogged with oil and dead skin debris. Each skin pore is connected to a sebaceous gland, which produces an oily substance called sebum. Sometimes extra sebum is deposited in these pores, encouraging bacterial growth, mainly of Propionibacterium acnes (P. acnes). Inflammation then develops deep inside the hair follicles or pores and produces cyst-like lumps just beneath the surface of the skin. Acne may also occur due to excess activity of androgen hormones.
White blood cells (leucocytes) attack P. acnes, causing skin inflammation and, thus, acne.
Symptoms of Acne:
Common symptoms of acne are listed below.
Small red tender bumps
Pimples (pustules) which are papules with pus at their tips
Large solid and painful lumps which develop beneath the skin surface (nodules)
Painful, pus-filled lumps beneath the surface of the skin (cystic lesions)
Usually, acne occurs on the face, forehead, shoulders, chest, and upper back, because these areas of skin have the most oil-producing (sebaceous) glands compared to other parts of the body.
Risk Factors that Trigger Acne:
Age group – it mostly affects teenagers.
Hormonal changes in the body.
Family history and heredity.
Applying greasy or oily products.
Friction or pressure from objects such as telephones, cell phones, helmets, tight collars, and backpacks.
Factors that Worsen Acne:
Hormones: Mainly androgens, which normally increase during puberty in both boys and girls and can worsen acne.
Drugs: Particularly those containing corticosteroids, testosterone, or lithium.
A diet that includes:
Skimmed milk
Carbohydrate-rich food: bread, bagels, and chips
Chocolate – not in all cases
Stress and Anxiety
11 Home Remedies for Acne:
Here are some easy yet effective home remedies for acne.
1. Apply apple cider vinegar:
Apple cider vinegar is made by fermenting apple cider or the unfiltered juice of pressed apples. Organic acids are its main content, and its use also helps prevent scarring. It should be used in small amounts, as a larger dose may cause skin burns. Dilute it with water in a 1:3 ratio and apply it with a cotton ball to the affected area for 5-20 seconds, twice daily.
2. Have Zinc supplements:
Zinc (Zn) is essential for cellular growth, the production of hormones, body metabolism, and good immunity. For acne and for maintaining good health, 40 mg per day is the recommended dose of zinc supplements, which you should take under the guidance of a physician.
3. Honey and cinnamon mask application:
Honey and cinnamon are excellent sources of antioxidants, which are found to be more effective than benzoyl peroxide and retinoids. Make a paste of honey and cinnamon in a 2:1 proportion, apply it to your clean face for 10-15 minutes, and then wash your face with water.
4. Treatment with Diluted Tea Tree oil (Melaleuca alternifolia):
Tea tree oil has the power to fight against bacteria and prevent skin inflammation. The oil is very strong, so it needs to be diluted before applying to your skin. Mix tea tree oil with water in 1:9
proportion, apply the mixture on the affected area with a cotton swab and moisturize your skin if needed. Apply diluted tea tree oil 1 or 2 times every day.
5. Have Green tea:
Green tea works as an antioxidant and is highly beneficial for the good health of your skin. Steep green tea bags in a cup of boiled water for 3 to 4 minutes, let the mixture cool and apply the
liquid on your affected skin by using cotton balls. Apply the remedial liquid once or twice daily.
6. Applying Witch-hazel:
Witch-hazel is extracted from the bark and leaves of Hamamelis virginiana, which has anti-bacterial and anti-inflammatory properties. Soak 1 teaspoon of witch-hazel bark in 1 cup of water for about 30 minutes, then bring it to a boil on the stove and let it simmer for an additional 10 minutes. Strain and seal the container holding the liquid, and apply the liquid to your skin with a cotton ball 1-2 times daily.
7. Use Aloe Vera:
Aloe Vera is highly beneficial for skin health. It is a clear gel, and you can apply it to acne for better results. Apply it on your skin 1-2 times every day.
8. Take Omega-3 fatty acid (fish oil) supplements:
The intake of omega-3 fatty acids will reduce acne. So, have omega-3 fatty acid supplements.
9. Exfoliate regularly by using a scrub and washing face.
10. Follow a diet with a low glycemic load
Following a diet with a low glycemic load helps reduce acne. So, include foods such as fruits, vegetables, legumes, nuts, and minimally processed grains in your diet, and reduce your intake of dairy products.
11. Exercise regularly and be stress-free.
Homeopathic Remedies for Acne:
Homeopathy plays a key role in the treatment of acne/pimples in teenage boys and girls as well as in individuals of any age group. Here are some effective homeopathic remedies for acne.
A few of the remedies, with their indications, are mentioned below:
Pulsatilla: Pulsatilla is quite effective in treating acne that gets worse after eating fatty food and those that get aggravated by warmth and heat. It is recommended to the acne patients, who are
around the puberty age. Pulsatilla is also effective in treating acne outbreaks during the menses.
Silicea: Silicea is effective for treating deep-seated acne. It is ideal for a person with low immunity who tends to experience fatigue and nervousness.
Sulphur Iodatum: This homeopathic remedy is recommended for treating acne occurring on the forehead and back which gets aggravated by heat.
Calcarea Carb: Calcarea Carb is recommended for an acne patient with frequent skin eruptions and pimples. This homeopathic remedy is effective in treating acne that leaves scars on the skin.
Hepar Sulph: Hepar Sulph is best suited for treating pustular acne which gets aggravated by cold weather.
All of this information may help you in preventing or treating acne. If your acne is more severe, it is advisable to approach a physician for further treatment. So, if you suffer from acne at any time, count on homeopathy without a second thought and get your acne treated effectively and safely.
Written by Dr. Vinay Ram. C, Associate doctor to Dr. Rajesh Shah
Nonconforming finite element approximation of crystalline microstructure
We consider a class of nonconforming finite element approximations of a simply laminated microstructure which minimizes the nonconvex variational problem for the deformation of martensitic crystals
which can undergo either an orthorhombic to monoclinic (double well) or a cubic to tetragonal (triple well) transformation. We first establish a series of error bounds in terms of elastic energies
for the L^2 approximation of derivatives of the deformation in the direction tangential to parallel layers of the laminate, for the L^2 approximation of the deformation, for the weak approximation of
the deformation gradient, for the approximation of volume fractions of deformation gradients, and for the approximation of nonlinear integrals of the deformation gradient. We then use these bounds to
give corresponding convergence rates for quasi-optimal finite element approximations.
• Error estimate
• Finite element
• Martensitic transformation
• Microstructure
• Nonconforming
318 dl to brcup - How much is 318 deciliters in imperial cups?
318 deciliters in imperial cups
Conversion formula
How to convert 318 deciliters to imperial cups?
We know (by definition) that: 1 dl ≈ 0.35195079727854 brcup
We can set up a proportion to solve for the number of imperial cups:
(1 dl) / (318 dl) ≈ (0.35195079727854 brcup) / (x brcup)
Now, we cross multiply to solve for our unknown x:
x brcup ≈ (318 dl / 1 dl) × 0.35195079727854 brcup ≈ 111.92035353457572 brcup
Conclusion: 318 dl ≈ 111.92035353457572 brcup
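The same arithmetic can be checked in a few lines of code; the factor below is the one quoted above.

```python
# Quick check of the conversion above, using the factor quoted on this page.
FACTOR = 0.35195079727854            # imperial cups per deciliter

def dl_to_brcup(dl):
    return dl * FACTOR

print(dl_to_brcup(318))              # ≈ 111.92035353457572 imperial cups
print((1 / FACTOR) / 318)            # ≈ 0.00893492531446541 (see the inverse below)
```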
Conversion in the opposite direction
The inverse of the conversion factor is that 1 imperial cup is equal to 0.00893492531446541 times 318 deciliters.
It can also be expressed as: 318 deciliters is equal to 1 / 0.00893492531446541 imperial cups.
An approximate numerical result would be: three hundred and eighteen deciliters is about one hundred and eleven point nine two imperial cups, or alternatively, an imperial cup is about zero point zero one times three hundred and eighteen deciliters.
[1] The precision is 15 significant digits (fourteen digits to the right of the decimal point).
Results may contain small errors due to the use of floating point arithmetic.
Distributed algorithms in synchronous broadcasting networks
In this paper we consider a synchronous broadcasting network, a distributed computation model which represents communication networks that are used extensively in practice. We consider a basic
problem of information sharing: the computation of the multiple identification function. That is, given a network of p processors, each of which contains an n-bit string of information, how can every
processor compute efficiently the subset of processors which have the same information as itself? The problem was suggested by Yao as a generalization of the two-processor case studied in his classic
paper on distributed computing (Yao, 1979). The naive way to solve this problem takes O(np) communication time, where a time unit is the time to transfer one bit. We present an algorithm which takes
advantage of properties of strings and runs in O(n log^2 p + p) time. A simulation of sorting networks by the distributed model yields an O(n log p + p) (impractical) algorithm. By applying Yao's probabilistic implementation of the two-processor case to both algorithms, we get probabilistic versions (with small error) where n is replaced by log n in the complexity expressions. We also present
lower bounds for the problem: an Ω(n) and an Ω(p) bound are shown.
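The abstract only states the bounds, but the probabilistic idea it alludes to — comparing short fingerprints instead of the full n-bit strings — can be illustrated with a toy simulation. The hashing scheme and the single broadcast round below are illustrative assumptions, not the paper's algorithm.

```python
# Toy illustration (not the paper's algorithm): each processor broadcasts a short
# fingerprint of its n-bit string and groups peers by fingerprint.  With a good
# random hash, wrong groupings occur only with small probability.
import hashlib

def fingerprint(bits: str, salt: str = "shared-randomness") -> str:
    return hashlib.sha256((salt + bits).encode()).hexdigest()[:8]

def multiple_identification(strings):
    """strings[i] is processor i's n-bit string; returns, for each processor,
    the set of processors that (probably) hold the same string."""
    broadcasts = [fingerprint(s) for s in strings]   # one broadcast per processor
    return [{j for j, fp in enumerate(broadcasts) if fp == broadcasts[i]}
            for i in range(len(strings))]

print(multiple_identification(["0110", "1010", "0110", "1111"]))
# [{0, 2}, {1}, {0, 2}, {3}]
```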
Bibliographical note
Funding Information:
Supported in part by NSF grant MCS-8303139.
ASJC Scopus subject areas
• Theoretical Computer Science
• General Computer Science
String propagation in black hole geometry
We consider string theory in the background of the two-dimensional black hole as described by the SL(2,R)/U(1) coset theory recently introduced by Witten. We study the spectrum of this conformal field theory, and give explicit representations for the tachyon vertex operators in terms of SL(2,R) matrix elements. This is used to compute the scattering of strings off the black hole and to show that the string propagator exhibits Hawking radiation. We further discuss the role of winding states and the appearance of bound states in the euclidean solution. We find that target-space duality in the lorentzian theory interchanges the black hole horizon with the space-time singularity. We conclude with a comparison with the non-critical c = 1 string and its formulation as a gauged SL(2,R) WZW model.
All Science Journal Classification (ASJC) codes
• Nuclear and High Energy Physics
Chain rule with one function containing two composite functions
Hmm... what you've written down is correct (though I'd suggest you just skip all the fuss and write only the last two lines). What are you unclear about?
The strategy to differentiation in general is to (i) differentiate term by term; then (ii) use chain rule if needed. This is supposed to be a mechanical process.
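Since the thread doesn't quote the actual expression, here is a made-up example with two nested compositions, worked through that same mechanical process:

$$\frac{d}{dx}\,\sin\!\left(e^{x^{2}}\right) = \cos\!\left(e^{x^{2}}\right)\cdot\frac{d}{dx}\,e^{x^{2}} = \cos\!\left(e^{x^{2}}\right)\cdot e^{x^{2}}\cdot 2x = 2x\,e^{x^{2}}\cos\!\left(e^{x^{2}}\right).$$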
Another minor note is that you should pretend you have infinite paper space. If you don't have enough space, simply ask for extra answer sheets rather than trying to cram everything in (then signpost correctly by saying "continued on extra answer sheet, page blah" as a nice gesture). As a personal note, I write even larger than you do (most of my pages contain 10 lines of writing at most), and I have never gone into an exam without asking for extra answer sheets.
It's not worth sacrificing clarity for the sake of saving trees.
(P.S. I googled "how many trees can you save by saving paper", and the result is kinda funny - but that's off-topic here)
Electron. J. Diff. Equ., Vol. 2012 (2012), No. 149, pp. 1-20.
Exact behavior of singular solutions to Protter's problem with lower order terms
Aleksey Nikolov, Nedyu Popivanov
Abstract:
For the (2+1)-D wave equation Protter formulated (1952) some boundary value problems which are three-dimensional analogues of the Darboux problems on the plane. Protter studied these problems in a
3-D domain, bounded by two characteristic cones and by a planar region. Now it is well known that, for an infinite number of smooth functions in the right-hand side, these problems do not have
classical solutions, because of the strong power-type singularity which appears in the generalized solution. In the present paper we consider the wave equation involving lower order terms and obtain
new a priori estimates describing the exact behavior of singular solutions of the third boundary value problem. According to the new estimates their singularity is of the same order as in case of the
wave equation without lower order terms.
Submitted May 8, 2012. Published August 29, 2012.
Math Subject Classifications: 35L05, 35L20, 35D05, 35A20.
Key Words: Wave equation; boundary value problems; generalized solutions; singular solutions; propagation of singularities.
Show me the PDF file (334 KB), TEX file, and other files for this article.
Aleksey Nikolov
Faculty of Mathematics and Informatics, University of Sofia, 1164 Sofia, Bulgaria
email: lio6kata@yahoo.com
Nedyu Popivanov
Faculty of Mathematics and Informatics, University of Sofia, 1164 Sofia, Bulgaria
email: nedyu@fmi.uni-sofia.bg
Monday, November 30th, 2020
The mean of a set of numbers $x_1, x_2 \ldots x_n$ is the unique value $\overline{x}$ which minimizes $\sum (\overline{x} - x_i)^2$, the sum of squared distances between it and the numbers. Why? The
mean seems very natural — for example, it's related to intuitive ideas about fairness — but the sum of squares less so. For instance, what's so special about squaring? Why not the sum of fourth
powers $\sum (\overline{x} - x_i)^4$ ? Or the sum of unsigned distances $\sum |\overline{x} - x_i|$ ?
My friend Dillon asked me this question. I felt I knew the answer, due to reading this post by Eric Neyman. But though I could do the algebra proving it, I wasn't able to convince Dillon (or myself,
after a few minutes) of it on an intuitive level. It still felt like there were unanswered questions. It was unsatisfying. This post is my attempt to write an explanation that you can "feel in your
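(The algebra the post alludes to is only a couple of lines: differentiate the sum of squares with respect to $\overline{x}$ and set the derivative to zero.)

$$\frac{d}{d\overline{x}}\sum_{i=1}^{n}(\overline{x}-x_i)^2 = \sum_{i=1}^{n}2(\overline{x}-x_i) = 2\left(n\overline{x}-\sum_{i=1}^{n}x_i\right)=0 \quad\Longrightarrow\quad \overline{x}=\frac{1}{n}\sum_{i=1}^{n}x_i.$$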
Optimizing a Swim Meet: Traveling Salesmen and Asexual Mutants
You wouldn't think there is much to swimming analytics. Compared to sports like baseball or football, swimming is extremely deterministic. Swimmers tend to have a certain speed in each stroke, and
they vary only slightly around that tendency from meet to meet. There are no interactions with teammates, collisions with opponents, or bouncing balls to worry about. But it turns out that there's
more to a swim meet than meets the eye.
My kids are on a summer swim team in the local league. It's a great activity--exercise, a bit of healthy competition, and all four of my kids and step-kids are on the same team for one season out of
the year. It's a lot of fun for everyone.
It's complicated, though. There are five age groups for both boys and girls for a total of ten competition groups. There are 4 strokes (fly, back, breast, and freestyle). Each swimmer is assigned to
one of three classes for each stroke. The A class has the faster kids, the B class the next faster kids, and the C class has the rest. First, second, and third place in each stroke-class (for each
age/gender group) earn points for the team. The A, B, and C classes all count equally, in the spirit of the league. It's 5 points for a 1st, 3 for a 2nd, and 1 for a 3rd, regardless of class. This
way, nearly everyone's performance can affect the outcome of the meet.
There's only one strategic variable in the meet. Each swimmer can only swim in 3 of the 4 stroke events in the meet. In other words, each swimmer has to skip one of the four strokes for the day. The
manager of each team seeds the meet a couple days beforehand. You'd think that the best strategy is to have each swimmer participate in their 3 best strokes. But that's not the case.
Consider a swimmer who can take 2nd place in his age group/class in his best stroke, and who can take 3rd place in his worst. His manager may want to have him skip his best stroke if the team's next
fastest swimmers were poised to take 3rd and 4th, (sliding up to 2nd and 3rd). The swimmer's absence in his best stroke would not cost his team any points, and his addition in his slowest stroke
would add a point for his team. With 110 swimmers, 10 age/gender groups, and 3 classes in each group per team, there could be several convoluted and counter-intuitive possibilities--too many for any
one person to think through.
I thought this could be conquered quickly by using Excel's 'Solver' optimization add-in. But after a lot of work I learned that Solver doesn't play nice with problems like this. The swim meet scoring function is non-linear and non-continuous. There are only binary options (does a swimmer participate in a certain event or not?). Each option can have a large impact on the outcome. After investigating and experimenting, I realized Solver just can't handle complex binary problems.
It's a fun problem, so I pursued it further by coding my own solution. The swim league isn't intended to be super-competitive, and each kid should have a fair opportunity to compete in each stroke. But occasionally our team is challenged by our cross-town rivals for the championship, and the team wants to go all-out. With the swimmers' times posted online, all the necessary data is available to anyone.
There are two parts to the optimization problem. The first is computational complexity, and the second is strategic and related to game theory.
I needed a way to run through all possible combinations of event participation for each group, and identify which combination provides the optimum score. This sounds easy, and it is except for one
thing--it's exponentially complex. There are 4 possible combinations of events for every swimmer in a group. If there are 6 swimmers in an age/gender group, that's 4^6 = 4,096 different combinations
we need to check for the optimum score. Any computer can do that in a flash. What if there are 8 swimmers in a group? That's 65,536, which is still very crunchable in a few seconds. What about 12
swimmers in a group, which is very common? That's 4^12 = 16,777,216 permutations, which takes the Advanced NFL Stats supercomputer over 2 hours to crunch.
The toughest challenges are caused by the largest groups. The 8-10 girls age group on our team has 20 swimmers, which means there are 1,099,511,627,776 combinations of swimmer-events, just for one
team. Uh-oh. That would take 5,461 days to crunch. We can do some triage to exclude some swimmers who rank low enough in every stroke/class not to matter, but directly computing the optimum would
still take far too much time. Using a compiled executable program or multi-threaded coding could also accelerate things, but not to the degree necessary for the larger groups.
From xkcd.com (If you get this right away, you're way ahead of me.)
Game Theory
The second part of the problem has to do with game theory. This isn't the typical engineering or transportation optimization problem. There's a thinking human on the other side of equation trying to
win. The value of my team's optimum lineup depends on the opponent's lineup and vice versa. In the optimization problem, I can only control my own team's lineup. Otherwise, the algorithm would make
the other team swim all their worst strokes.
The algorithm needs a starting point to represent the opponent's lineup of events. A team's lineup needs to be drafted without knowledge of the other's. There are a few options here. We can start
with a randomly generated opponent lineup, and then iterate. Once I've optimized against a random opponent lineup, I could optimize the opponent lineup vs my own team's, and then optimize my own
lineup again to counter the opponent's optimization. Presuming there isn't another sports-analytics-obsessed parent on the other team, this might be a good approach.
Further, we can continue to iterate until we reach a Nash Equilibrium where both teams are optimized against each other's optimizations, and no team could benefit from changing its lineup. Although
it's not realistic to ever expect that level of overkill by managers, it's still interesting to see where the equilibrium is and what score it represents. This approach is feasible for smaller groups
of swimmers, but not for larger groups--at least using the direct computational method. It requires the optimization to be run multiple times.
Another method is to start with an intuitively reasonable opponent lineup. A very good heuristic is to have each swimmer participate in his three most competitive strokes. This would be reasonably
optimum except for combinations like those mentioned previously--when it makes sense for a competitive swimmer to allow slower swimmers on his own team to score just as many points in order to allow
him to place in another stroke. I'm not the manager of our team, but I imagine this is how most managers seed their lineups.
A classic example of a computationally complex optimization problem is the traveling salesman problem, aka the TSP. Consider a salesman who must visit 10 cities and return to his starting point. There is a known time/distance between each pair of cities, and the salesman needs to visit each city exactly once in the minimum amount of time. There are 362,880 combinations of paths he could travel. Add an 11th city, and there are now 3,628,800 combinations. Every day in the working life of a FedEx or UPS driver is one big TSP.
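To make the factorial blow-up concrete, a brute-force TSP solver is only a few lines; the distance matrix here is random and purely illustrative.

```python
# Brute-force TSP: fix city 0 as the start and try every ordering of the rest.
# With k cities this checks (k-1)! tours, which stops being practical very fast.
import itertools, random

k = 8
random.seed(1)
dist = [[0 if i == j else random.randint(1, 100) for j in range(k)] for i in range(k)]

best_tour, best_len = None, float("inf")
for perm in itertools.permutations(range(1, k)):
    tour = (0,) + perm + (0,)
    length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
    if length < best_len:
        best_tour, best_len = tour, length

print(best_tour, best_len)
```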
One of the ways to crack the TSP is to use a genetic algorithm. The power of evolution's selection effect can be harnessed to quickly approximate the optimum solution, and often find the precise optimum solution. For extremely complex problems, the tradeoff between computational time and the level of optimality usually makes sense. I thought this approach was super-cool, so I thought I'd try to apply it to the swim meet problem.
Here's how I approached the swim meet optimization problem using genetic concepts. In every age/gender group, think of the lineup of events that each swimmer skips as a genome--a strand of DNA. But
instead of GCTACTCAGGCA, it's Fly-Breast-Fly-Fly-Back-Free-Fly-Breast-Free... I created an original generation of 30 "individuals" (combinations of own-team swimmer-event lineups) randomly. Each
different combination of swimmer-event "DNA" will produce a fitness level, determined by the meet scoring rules.
Then I spawned a new generation of 30 individuals (lineups) based on three different methods. First, I kept the 10 best lineups from the previous generation, defined as producing the highest meet
scores. These are known as "the elite." The other 20 lineups die off and are replaced by children of the elite. This ensures the evolution process doesn't randomly regress into less fit populations.
Ten of the next-generation "children" are created by asexual mutation. Each of the elite is randomly mutated slightly to create a batch of new children, some of which may happen to be more 'fit' than their parent. [Yes, that's right. I wasted an unknown amount of time making imaginary mathematical 'asexual mutant children.']
The other ten children lineups were created sexually [I know how weird this sounds] by combining the genomes of two of the elite chosen randomly. In this way, the algorithm can eventually merge the
best sub-combinations into fitter and fitter solutions. This can provide an evolutionary leap out of a local maximum, where slight mutations can no longer improve outcomes, and onto a region of the
problem closer to the global maximum. The algorithm is repeated until it no longer produces any improvement in fitness (score) within a reasonable number of iterations.
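A stripped-down version of that loop might look like the sketch below; the population sizes mirror the description above, but the scoring function is just a placeholder standing in for the league's 5/3/1 rules evaluated against an assumed opponent lineup.

```python
# Sketch of the genetic algorithm described above.  An "individual" is a lineup:
# for each swimmer, the one stroke they will skip.  meet_score is a placeholder.
import random

STROKES = ["fly", "back", "breast", "free"]
POP_SIZE, N_ELITE, STALL_LIMIT = 30, 10, 1000

def meet_score(lineup):
    # Placeholder fitness; replace with the real 5/3/1 meet scoring against the
    # assumed opponent lineup.
    return sum((i * 7 + len(stroke)) % 5 for i, stroke in enumerate(lineup))

def mutate(lineup):
    child = lineup[:]
    child[random.randrange(len(child))] = random.choice(STROKES)
    return child

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def optimize(n_swimmers):
    pop = [[random.choice(STROKES) for _ in range(n_swimmers)] for _ in range(POP_SIZE)]
    best, stall = max(pop, key=meet_score), 0
    while stall < STALL_LIMIT:
        pop.sort(key=meet_score, reverse=True)
        elite = pop[:N_ELITE]                                # keep the 10 best lineups
        mutants = [mutate(e) for e in elite]                 # 10 asexual mutants
        children = [crossover(*random.sample(elite, 2))      # 10 crossover children
                    for _ in range(POP_SIZE - 2 * N_ELITE)]
        pop = elite + mutants + children
        top = max(pop, key=meet_score)
        if meet_score(top) > meet_score(best):
            best, stall = top, 0
        else:
            stall += 1
    return best

print(optimize(n_swimmers=12))
```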
I used a set of data from our team's closest meet this year as a test. I selected an age group that happened to have 12 swimmers from each team. This provided a useful test because it's enough
swimmers to make things tough but also solvable by brute force. That way I know the true optimum as the answer key for testing the genetic algorithm.
The brute-force optimization approach took 2 hours and 14 minutes to evaluate every possible lineup. The score was 46 for the Barracudas, our home team, to 62 for the Lightning, our rival. (We
happened to be outgunned in this age group.) This result was based on a single iteration optimization versus the 'intuitively reasonable' opponent lineup heuristic, where each opponent swimmer simply
swims his three most competitive strokes. It began from a standing start, initialized with a population of randomly chosen lineups.
The genetic algorithm took a fraction of the time of the brute force computation, averaging 17 seconds to reach optimum and then continue for another 1000 iterations in vain search for a higher
score. In the dozens of times I ran it, the algorithm never failed to produce a lineup worth the known optimum of 46 points. It reached the optimum after an average of 75 generations, which equates
to 75 * 30 = 2,250 potential solutions, and never exceeded 184 generations.
The impractically large group of 20 swimmers produced a (probable) optimum in a very steady average of only 24 seconds. I wrote probable because I'm not going to try to run the program for 12 years
to search all the possible combinations. The genetic algorithm consistently produced the same "best" score every time I ran it, but I can't prove it's the optimum. I think the reason it worked so
well with the very large group was because the larger groups still has the same number of swimmers as the smaller groups that matter in terms of scoring, and the rest of the swimmers can choose any
event combination without an impact on the meet score. (Kind of like real-world 'junk DNA', I guess.)
I could probably tune the constants in the model to improve the speed. The number of 'stall generations' allowed where there is no longer any improvement was set to 1000, which could be safely
lowered to quicken things. The biggest help would be to initialize the population with reasonably high scoring lineups instead of random ones, but that wouldn't have been a fair comparison with the
brute force method. The number of elites, mutants, and children could also be tweaked to improve things, but I'm satisfied with it as is. Besides, this is more of a learning exercise than a real
attempt to squeeze out an extra few points at the meet.
23 Responses to “Optimizing a Swim Meet: Traveling Salesmen and Asexual Mutants”
Anonymous says:
I enjoyed the article. Really interesting. First, there's the implication that a swim meet is, in part, a contest of each coach's optimization skills.
Second, you've described a dual meet. You could add another layer of complexity with an open meet, where three or more teams compete. (That gives you something to work on for the rest of
the summer. :)
One thing that wasn't quite clear to me: Were you optimizing both teams at once or optimizing your team vs. a set lineup? You discuss doing the former, but the testing you describe seems
to work off the latter. I'd be curious how much difference there is between the Nash Equilibrium optimum vs. the optimum against a set lineup.
Sean Forman says:
I'm a bit shocked to no longer be the only stathead to publish an article on TSP and Genetic Algorithms.
Brian Burke says:
Sean-That's the real deal!
Ian says:
Bravo! I bet your team's coach and all of the opposing parents are going to hate you.
Did you assign winners deterministically or probabilistically based on variation in times?
Other avenues you could explore are:
1. What is the home pool advantage?
2. Do certain kids perform better in different air/water temperatures?
3. Do kids perform better breathing every 2, 3, or 4 strokes?
I'll be looking forward to your followup in the next decade on the 20 person case.
Dan says:
How about this algorithm (which assumes that swim times are known exactly)?
Step 1: Start with the illegal lineup where every swimmer swims in every event.
Step 2: With the lineup that you're looking at, check if any swimmer scores points in all 4 events. If not, go to step 4. If yes, go to step 3.
Step 3: Take a swimmer who scores points in all 4 events, and create 4 alternate lineups, which remove them from each of the events. Send each lineup back to step 2.
Step 4: Any swimmer that is left participating in 4 events can be removed from any one of the events in which they do not score points to create a legal lineup (it doesn't matter how you
choose which of their non-scoring events to remove them from since the point totals will be the same).
You will end up with a set of possible lineups which covers all of the relevant variation and is much smaller than the set of all possible lineups. Essentially, you're taking the most
general (illegal) case, and breaking it into subcases based on what events a top swimmer is skipping, and then breaking each of those subcases into subcases, and so on, until there is no
relevant variation left (only options for the non-scoring swimmers). At most you'll end up with 4^k possible lineups, where k is the number of swimmers who could score points in all 4
events if they swam in all 4 (given that other swimmers can only swim in 3).
Then you need to optimize both team's lineups. That optimization process is not trivial (given the game theory issues), but it should be much simpler given that you only need to optimize
over a small subset of possible lineups.
(In practice, you may want to pretend that the 4th & 5th spots also score points, to account for the fact that swimming times are non-deterministic. e.g., It's much better to have a
swimmer in an event where they are expected to finish 4th than in one where they are expected to finish 14th, because of the chance that they will crack the top 3. Or you could account
for this using a rough hack in step 4, where you remove each swimmer from their worst event.)
Brian Burke says:
Dan-That's excellent. My own thought was to give 4th place a notional half point, or to weight all the expected 1st, 2nd, 3rd, 4th place points according to probabilistic averages.
Sean S says:
Which software did you use for the genetic algorithm?
Brian Burke says:
Sean-I wrote my own code. But there are several analytics applications and add-ins for MatLab or R that could do something like this. I wanted to do it myself just to learn it, and also,
if I wanted to actually employ it, I could customize it to automatically download and parse the relevant data for each meet.
Nate says:
This is more like knapsack than the traveling salesman. (Of course both are NP-hard.) Something to remember is that general optimizations are hard, but real world examples can often be
solved relatively easily.
Another heuristic that I would expect to be very powerful here is using a greedy strategy - get as many first places as possible, then look for 2nd places, and then 3rd places.
While this sort of thing is instructive as an exercise, it does seem to fly in the face of the structure, which is clearly set up so that lots of swimmers can get 'small wins'. (If it were really competitive, teams would just sandbag.) So, maybe, instead of optimizing for total team points, it may make more sense to spread out the podium spots -- especially in the B and C classes.
Anonymous says:
One doesn't have to examine every possibility for things like this... Treating it as depth-first tree searching, using alpha-beta pruning and reasonable ordering of nodes, will result in significant reductions. Basically, you can discard huge portions of the tree where you know that you cannot beat a previously calculated score.
Anonymous says:
In the swim league of my youth, a swimmer could race in any age group at or above their current age, which could add an interesting extra part of the algorithm (if there is a strong
enough age group/swimmer). In one meet, an opponent had a very strong age group that projected out to taking 1st and 2nd in their own age group, but my team had a stronger 'next' age
group. Against my team, they decided to swim one of the kids up to the older group in hopes that they'd take two 1sts instead of first/second (5+5 versus 5+3). Had the strategy worked,
they would have won the meet, but, instead, they 'just' got the same score as if they didn't swim him up.
Brian Burke says:
Miles-They can do that in this league too. But it's usually only done in the relay portion, which does add another fun optimization wrinkle. Sometimes a team can get a 3rd place (out of
3) in the relays just by being able to field an additional 4-swimmer lane.
Graeme says:
I've been reading your blog for some time, but this is the first time I have commented. As the son of an academic with a specialism in GAs/heuristics, I was pleasantly surprised to find
mention of them on this site. Keep up the good work, I'm always interested to see real-world applications of his field.
Michael C says:
Great stuff. This could almost be the wikipedia article explaining how genetic algorithm problem solving works in practice. Thanks.
Rings of Continuous Functions on Spaces of Finite Rank and the SV Property
Document Type
Article - On Campus Only
Let X be a compact topological space and let C(X) denote the f-ring of all continuous real-valued functions defined on X. A point x in X is said to have rank n if, in C(X), there are n minimal prime
ℓ-ideals contained in the maximal ℓ-ideal M_x = {f ∊ C(X) : f(x) = 0}. The space X has finite rank if there is an n ∊ N such that every point x ∊ X has rank at most n. We call X an SV space (for
survaluation space) if C(X)/P is a valuation domain for each minimal prime ideal P of C(X). Every compact SV space has finite rank. For a bounded continuous function h defined on a cozeroset U of X,
we say there is an h-rift at the point z if h cannot be extended continuously to U ∪ {z}. We use sets of points with h-rift to investigate spaces of finite rank and SV spaces. We show that the set of
points with h-rift is a subset of the set of points of rank greater than 1 and that whether or not a compact space of finite rank is SV depends on a characteristic of the closure of the set of points
with h-rift for each such h. If X has finite rank and the set of points with h-rift is an F-space for each h, then X is an SV space. Moreover, if every x ∊ X has rank at most 2, then X is an SV space
if and only if for each h, the set of points with h-rift is an F-space.
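Restating the abstract's own notation in display form (nothing here goes beyond what the abstract already says):

```latex
% Definitions from the abstract, restated.
M_x = \{\, f \in C(X) : f(x) = 0 \,\}
\qquad
\operatorname{rank}(x) = \#\{\, P \subseteq M_x : P \text{ a minimal prime } \ell\text{-ideal of } C(X) \,\}
% X has finite rank iff there is an n with rank(x) <= n for all x in X;
% X is SV iff C(X)/P is a valuation domain for every minimal prime ideal P of C(X).
```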
Original Publication Citation
Suzanne Larson (2007) Rings of Continuous Functions on Spaces of Finite Rank and the SV Property, Communications in Algebra, 35:8, 2611-2627, DOI: 10.1080/00927870701327880
Digital Commons @ LMU & LLS Citation
Larson, Suzanne, "Rings of Continuous Functions on Spaces of Finite Rank and the SV Property" (2007). Mathematics, Statistics and Data Science Faculty Works. 160.
Philosophy of Real Mathematics
Klein 2-Geometry IV
Can we sustain our momentum for the categorification of the Erlangen Program into its fourth month? At least now it is clear that what we need is a good account of how to quotient a 2-group by one
of its sub-2-groups. I've been messing around a little with some baby 2-groups and think I see how they work. I now think that the categorified Euclidean geometry that cropped up
earlier, i.e., the one that spoke of weak points and weak lines, arises from a discrete categorification of the Euclidean group. This has Euclidean transformations as 1-morphisms, and only trivial
2-morphisms. We may expect the geometry from more general 2-groups to look quite different.
Things are hotting up. For the first time in my life (to my face at least) I've been called 'evil'. What can be achieved before the hiatus of a sojourn in France?
60 Comments:
david said...
From Ars Mathematica we read that John Iskra is writing a thoroughly category theoretic treatment of algebra - (Really) Modern Algebra. The title presumably refers to van der Waerden's Modern
Algebra. Chapter 5 deals with groups, and might be a good basis to develop a systematic 2-group theory. I am a little surprised that more hasn't been done, although it could be buried in some
text on bicategories.
Are there equivalents of the 3 isomorphism theorems? What is the proper definition of an abelian/soluble/nilpotent/simple 2-group? Is there a Jordan-Holder type theorem? Can 2-groups be defined
in terms of generators and relations? The questions are endless.
Hi! Sorry to disappear on you: I've been hunkering down to get some work done. I just finished editing the galley proofs of my paper Quantum Quandaries, which will appear in Structural
Foundations of Quantum Gravity by Steven French, Dean Rickles and Juha Saatsi. And, I finished polishing up the lecture notes taken by Mike Shulman, based on my talks on n-categories and
cohomology at Chicago.
In my spare time, I've been working out a theory of "projective 2-geometry", based on the theory of 2-vector spaces that Alissa Crans and I came up with. I should tell you about it, because it
seems like a natural example of this Klein 2-geometry business. It seems to work fine, but I still can't tell how interesting it is!
Please tell me sometime why you think the bits of Euclidean 2-geometry we came up with are associated to the discrete 2-group on the Euclidean group. I thought about it for a while and I think
you're right, but I'd like to hear your reasoning.
You raise lots of questions:
Are there equivalents of the 3 isomorphism theorems? What is the proper definition of an abelian/soluble/nilpotent/simple 2-group? Is there a Jordan-Holder type theorem? Can 2-groups be defined
in terms of generators and relations? The questions are endless.
I think a lot of these could be answered with existing knowledge about 2-categories and bicategories, without too much struggle. But, for a lot of them, it's probably better to wait until
1) we generalize to infinity-groups
2) we have specific examples that make us need the answers to these questions.
For example, my work with Alissa on Lie 2-algebras was justified (in my opinion) by our discovery of a canonical 1-parameter family of Lie 2-algebras deforming any simple Lie algebra. For more
general questions it would probably have been better to move right on to Lie infinity-algebras and think of them as Stasheff's L-infinity algebras, since he already has a substantial technology in place.
You may think it's odd to put off certain questions until we answer them for infinity-groups, but remember, most people call infinity-groups topological groups or loop spaces, so they're already
well studied; the only "new" thing is to study questions where everything is done "up to homotopy" - and even this isn't very new, since topologists have been doing that for loop spaces since the
late 1960s. From this viewpoint, 2-groups are topological groups with only pi_0 and pi_1 nontrivial.
I'll answer a couple of easy questions, though!
Abelian 2-groups: looking at the periodic table of n-categories, one sees that one has 3 levels of abelianness in this column: 2-groups, braided 2-groups and symmetric 2-groups. All of them can
be classified (in principle) using group cohomology, as I explain in my paper. The first people to do this explicitly were Joyal and Street, in their unpublished paper on braided monoidal
categories, which is now available online, thank god!
Generators and relations: yes, 2-groups and indeed every sort of "purely algebraic" structure can be defined by generators and relations; this is part of the enormous generalization of universal
algebra initiated by Lawvere.
I've been working out a theory of "projective 2-geometry", based on the theory of 2-vector spaces that Alissa Crans and I came up with.
It would be great to hear about this, and perhaps why projective geometry works here, and what would be different from other versions of 2-vector spaces, such as the one in Elgueta's paper -
Generalized 2-vector spaces and general linear 2-groups, which discusses a GL(Vect[C]).
As for why I think our original categorified Euclidean geometry is based on the discrete categorification of the Euclidean group, I think it helps to consider what a quotient of 2-groups should
look like.
Let's denote the 2-group we have been considering as G-H. Speaking loosely, this has G objects and G x H arrows. Consider a sub-2-group M-N. The M acts on G to give G/M objects (really G//M).
Quotienting the arrows gives (G x H)/(M x N), so very loosely, H/N arrows emerging from each object (component).
Remember we called the 2d Euclidean group E, the stabilizer of a point P, and the translations T. The quotient E-1/P-1 has T or R^2 objects (each really a copy of P), and a single arrow out of
any of these points, i.e., the identity arrow. That seems to be the plane of weak points we were considering. I think weak lines only go through weak points whose internal dials are pointing in
their direction (something like a gauge theory with no turning of pointer along lines). There might be a richer geometry where these pointers can spin along a line.
If M-N is a normal sub-2-group, the quotient ought to be a 2-group too. We should expect that P-T is the quotient E-T/T-1. Your Poincare 2-group is a similar quotient.
David wrote:
As for why I think our original categorified Euclidean geometry is based on the discrete categorification of the Euclidean group, I think it helps to consider what a quotient of 2-groups should
look like.
We certainly need to understand that if we're going to get anywhere with categorified Klein geometry.
Let's denote the 2-group we have been considering as G-H.
Yup. "Experts" write this 2-group as H->G, since since any strict 2-group gives a crossed module with a target map
t: H -> G.
There's also an action of G on H, and that's the only structure that the notation H->G conceals.
So, just for fun, I'll change your notation G-H to G<-H, and see if it looks nice or silly.
Speaking loosely, this [2-group] has G objects and G x H arrows.
Yes; really the group of arrows is the semidirect product of G and H, since G acts on H - but apart from that you're not being "loose" at all here.
Consider a sub-2-group M<-N. The M acts on G to give G/M objects (really G//M). Quotienting the arrows gives (G x H)/(M x N), so very loosely, H/N arrows emerging from each object (component).
Let me make this a bit more precise, since I'm worried about cutting too many corners here. If we don't understand quotients of 2-groups, we can't do Klein 2-geometry.
Let's first suppose we have the 2-group M<-N acting on any category C. Let's figure out what the quotient
C//(M<-N)
should be. If we then restrict to the case where C is a 2-group, we'll be sure to get the right answer.
(After all, one level down, the quotient of a group by a subgroup, viewed as a mere set, is nothing but a special case of the quotient of a set by a group.)
For the moment, just to spare both of us some pain, I'll only consider the case where M<-N acts strictly on C: in other words, the usual laws for an action
c1 = c
(cm)m' = c(mm')
hold as equations instead of up to isomorphism. This will be the case when we study a subgroup of a strict 2-group acting on this group by right multiplication - and we've only been talking about
strict 2-groups lately.
So, how do we form the quotient
C//(M<-N) ?
We must use a weak quotient or we'll get in trouble somewhere down the line. So, we start with C and instead of quotienting by making some objects equal, we do so by throwing in new isomorphisms,
which satisfy some equations called "coherence laws".
For each object c of C and m in M, we throw in an isomorphism
c -(c,m)-> cm
We impose equations saying these are natural in c and m. In other words, we make certain obvious squares commute whenever we have a morphism
c -f-> c'
or a morphism
m -f-> m'
(Test to see if you follow: which square can actually be drawn as a commutative triangle?)
We also impose some coherence laws related to multiplication and units in our 2-group. If we have m and m' in M, we can form
c -(c,m)-> cm -(c,m')-> (cm)m'
and also
c -(c,mm') -> c(mm')
These should be equal, or we'll be throwing in many different isomorphisms between things we're trying to "identify", instead of just one.
Similarly, we want
c -(c,1) -> c1 = c
to be the identity morphism.
(Test: show this is actually redundant.)
I think that's all. Quotienting by a weak 2-group action is not much harder; the only thing we need to remember is that instead of having equations (cm)m' = c(mm') and c1 = c, we have
isomorphisms, so we need to include those isomorphisms in the last two coherence laws I mentioned.
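For reference, here is the strict case of the construction just described, collected in one display; the label ι_{c,m} for the adjoined isomorphisms is chosen here, not notation from the post:

```latex
% Weak quotient C // (M <- N), strict action case: for every object c of C and
% every m in M, adjoin an isomorphism
\iota_{c,m} : c \xrightarrow{\;\sim\;} c\,m
% natural in c and m, subject to the coherence laws
\iota_{cm,\,m'} \circ \iota_{c,m} = \iota_{c,\,mm'}
\qquad
\iota_{c,1} = \mathrm{id}_c
% (the second law being redundant, per the "Test" above).
```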
In my next post - I think I'll pause here so readers don't go insane with boredom - I'll see what we get when we mod out the "discrete Euclidean 2-group" by the "discrete rotation sub-2-group".
Okay, now let's study categorified Euclidean n-space: the weak quotient
E(n)<-1 // O(n)<-1
Here E is the Euclidean group in n dimensions, and we're making it into the discrete Euclidean 2-group E<-1 by turning it into a category with only identity morphisms - that's what the "<-1" means.
Similarly, O(n) is the subgroup of the Euclidean group that stabilizes a point - better known as the rotation-reflection group - and O(n)<-1 is the discrete rotation-reflection 2-group.
Lastly, the "weak quotient" operation // was described in my previous post. (Nobody who writes in blogs talks like this!)
Putting them together, we see first of all that
E(n)<-1 // O(n)<-1
has E(n) as its objects. What about the morphisms? We start out with just identity morphisms in E(n)<-1, but then we throw in new morphisms when we take the weak quotient. We throw in an isomorphism
e -(e,o)-> eo
for any e in E(n) and any o in O(n). The naturality squares I mentioned last time are all trivial, so the only coherence law we need is that
e -(e,o)-> eo -(eo,o')-> eoo'
equals
e -(e,oo')-> eoo'
What do we get?
This is just the weak quotient
E(n)//O(n) !!!
So, David was right.
We can draw a couple of morals from this. One is that whenever we have a group G and a subgroup H, we have
G<-1//H<-1 = G//H
so our weak quotient operation on 2-groups is backwards-compatible with our earlier defined weak quotients of groups.
But, we didn't use anything about G<-1 being a 2-group; it could have been any discrete category. Say G is a group acting on a set X and let Disc(X) be the corresponding discrete category with X
as objects and only identity morphisms. Then we've really shown this:
Disc(X)//G<-1 = X//G
or if you prefer
Disc(X)//Disc(G) = X//G
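Unwinding the statement just made, the weak quotient of a discrete category by a discrete 2-group has the following explicit description (a restatement of the computation above, in notation chosen here):

```latex
% Objects and morphisms of  Disc(X) // Disc(G):
\mathrm{Ob}\bigl(\mathrm{Disc}(X) /\!/ \mathrm{Disc}(G)\bigr) = X
\qquad
\mathrm{Hom}(x, y) = \{\, g \in G : xg = y \,\}
% with composition given by multiplication in G -- i.e. exactly the action groupoid X // G.
```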
This stuff is important because it shows how Klein 2-geometry - the study of geometries with 2-groups rather than groups of symmetries - is related to our "automatic categorification" process
that turns a homogeneous space G/H into a category G//H. We now see this category is indeed an example of a Klein 2-geometry, with the discrete 2-group Disc(G) as its symmetries.
So - good idea, David!
I think now we're ready for some Klein 2-geometries where the symmetry 2-group is a bit more interesting - not just a group in wolves' clothing. So, I should tell you about projective 2-geometry.
Okay, here's a taste of projective 2-geometry. I won't go into much detail now, since I've written too much on your blog already today. I'll just sketch a bit of what I did while sitting around
at various Starbuckses - annoying plural, that! - around Shanghai.
(I only go to Starbuckses - Starbucki??? - when Lisa is shopping in tourist areas. That's about the only time I see white folks these days. For just 8 times the price of a bottle of ice tea at a
typical Chinese store, you can get a steaming hot cup of coffee at Starbucks, perfect for a Shanghai summer. But, the caffeine blast is enough to make me dream up all sorts of crazy cool math, so
the sweat is worth it.)
Before we categorify projective spaces, we should probably try to categorify vector spaces. I know three main ways to do this (and lots of small variations):
1) Kapranov-Voevodsky 2-vector spaces. These are categories equivalent to Vect^n, where Vect is the category of finite-dimensional vector spaces. They're very useful in topological quantum field
theory, but they have no "negatives" of objects, much less the ability to multiply an object by an arbitrary complex number, so they're not an easy context for generalizing most of ordinary
linear algebra.
2) Baez-Crans 2-vector spaces. These are categories with a vector space of objects, a vector space of morphisms, and with all the usual category operations (e.g. composition) being linear. Here
you can take an object and multiply it by -1.73 + 42 i. There's a nice theory of categorified Lie algebras in this context, called "Lie 2-algebras", and these have corresponding "Lie 2-groups".
3) Elgueta 2-vector spaces. These are free additive complex-linear categories on ordinary categories. In other words, to form one, we start with a category and formally throw in direct sums of
objects and complex linear combinations of morphisms.
One great thing about these is that you can form the "2-group 2-algebra" of a 2-group, much like the "group algebra" of a group.
Anyway, for my purposes I want to use 2) - not just because I helped invent this kind of 2-vector space, but because these are the most closely linked to Lie groups and other ideas from
differential geometry. We're doing "Klein 2-geometry", and I'd like it to be related to geometry that we know and love!
Given a (complex) vector space V, the group C* of invertible complex numbers acts on V by scalar multiplication, and we can form the projective space PV like this:
PV = (V - {0})/C*
Given a 2-vector space V, the discrete 2-group Disc(C*) acts strictly on V by scalar multiplication, and we can form the projective 2-space PV like this:
PV = (V - {0})//Disc(C*)
Here {0} means something funny: it's the connected component of the object 0 in V. You can't just rip out an object from a category without pulling out all the morphisms from and to it! And, it
would be insane not to pull out all isomorphic objects, too. All morphisms in a 2-vector space are isomorphisms, so when we rip out 0, all objects isomorphic to it, and all isomorphisms between
them, we are removing the connected component of 0: the category consisting of all objects with morphisms from or to 0, and all morphisms between them.
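Side by side, the two constructions read as follows; the shorthand ⟨0⟩ for the connected component of the zero object is introduced here only for compactness:

```latex
% Ordinary projective space vs. its categorified analogue:
P V = (V \setminus \{0\}) \,/\, \mathbb{C}^{*}
\qquad
P \mathcal{V} = \bigl(\mathcal{V} \setminus \langle 0 \rangle\bigr) \,/\!/\, \mathrm{Disc}(\mathbb{C}^{*})
% where <0> denotes the connected component of the zero object, as described above.
```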
I can describe what PV looks like very explicitly.
Up to equivalence, 2-vector spaces are classified by two natural numbers - the "first and second Betti numbers". (In general, n-vector spaces are classified by n numbers. When n = 1 this one
number is called the "dimension".) It would be fun to see even more explicitly what PV looks like as a function of these two Betti numbers!
When the second Betti number is zero, our 2-vector space V is secretly just a vector space, and our projective 2-space PV is just an ordinary projective space.
All this is very nice: an orderly setup subsuming ordinary projective geometry as a special case.
I also suspect that these projective 2-spaces are "smooth 2-spaces" in the sense of Toby Bartels. These are the kind of category where you can do differential geometry: they're like manifolds in
the world of categories.
For the purposes of Klein geometry, what matters more is that any 2-vector space V has a strict 2-group GL(V) acting on it, and this action commutes with scalar multiplication so we get GL(V)
acting on PV. Presumably we get something called "PGL(V)" acting on PV, just as in ordinary projective geometry, but I haven't thought about that yet.
Also nice is that whenever we have a nontrivial "2-vector subspace" W in V, we get an inclusion
PW -> PV
These give "figures" like projective points, lines, etc. in our projective 2-space. But now the types of figures are indexed not just by dimension, but by two Betti numbers!
And, GL(V) will act on the set of figures of a given type.
So, this seems worth looking into a bit. If you ask me questions about stuff, that would give me an excuse to explain it in more detail.
To me, it seems to be useful to make explicit that all three examples of 2-vector spaces that John mentions in the above comment are special cases of a single concept.
To me, a 2-vector space is
- a monoidal category C, playing the role of the ground ring/ground field
- a module category V over C, playing the role of a vector space over C .
For fixed C, these 2-vector spaces live in the obvious 2-category
Mod_C
whose objects are C-module categories, whose morphisms are C-linear functors and whose 2-morphisms are natural trafos between these.
For example, take C = Disc(K), for K some field and Disc(K) the discrete category with elements of this field as its objects. Then
Mod_Disc(K)
is the 2-category of Baez-Crans 2-vector spaces over K, i.e. of categories internal to Vect_K.
On the other hand, for C = Vect_K we get a type of 2-vector spaces that live in
Mod_Vect_K
or, if we agree on writing
Vect_K = Mod_K
(which should remind us of closely related similar recursive structures, like for instance
Tor_Tor_U(1) ), in
Mod_Mod_K.
Mod_Mod_K is pretty large. A nice tractable sub-2-category of Mod_Mod_K is
BiMod(Vect_K),
the sub-2-category of algebras, bimodules and bimodule homomorphisms internal to Vect_K.
(In order to regard this as a sub-2-category of Mod_Vect_K we need to send every algebra to the category of modules of that algebra and every bimodule to the functor obtained by tensoring with
this bimodule.)
Now, Kapranov-Voevodsky 2-vector spaces are again a tiny sub-2-category of BiMod(Vect_K), namely that where all algebras involved are of the form K^n, for natural numbers n.
Elgueta's 2-vector spaces still live in a sub-2-category of Mod_Mod_K, somewhere in between the full thing and KV-2-vector spaces (including these as a special case).
To me it seems very useful to keep this big picture in mind.
For every 2-vector space in the sense of a module category V over some monoidal category C, we can form the corresponding _projective_ 2-vector space, I think.
Inside C, we find the Picard 2-group C* of C.
If we regard C as a 2-category with a single object, then I guess we can neatly define C* as the core of C. It contains all objects which have weak inverses and all isomorphisms between these.
For the special case of Baez-Crans 2-vector spaces we have
C = Disc(K)
C* = Disc(K*) .
On the other hand, for the case C = Vect_K we find
C* = 1DVect_K,
the category of 1-dimensional vector spaces.
In applications to 2DQFT, we often have C being some category of representations of some quantum group or vertex algebra, and C* in this case is known as the category of "simple currents".
A "zero 2-vector" in a 2-vector space should be an object which is fixed by the action of C*, up to isomorphism.
So I think we can generalize the construction of projective 2-vector spaces indicated by John to the most general 2-vector spaces living in Mod_C, for any C.
We remove all zero 2-vectors (all fixed points of the Picard 2-group C* of C) and then divide out by the action of that Picard 2-group C*.
I think I'll pause here so readers don't go insane with boredom
Is this aimed at the youth with their reduced attention spans? Perhaps we could get some background music going to liven it up. Did you read Fred Caligeri's comment in his review of a book on
modular forms:
"With today’s iPod generation more likely to study elliptic curves and modular forms before learning any class field theory, Shimura’s book by itself is no longer apposite as an introduction to
modular forms."
Urs' generalization sounds very interesting, but limiting ourselves to John's 2-vector spaces, for concreteness sake
Up to equivalence, 2-vector spaces are classified by two natural numbers - the "first and second Betti numbers".
So, the first measures the dimension of the space of the objects, and the second the dimension of the space of morphisms with source 0, as in the 2-Term picture? Then if n_1 <= n_2, 0 will be
connected to every object, so V - {0} is empty.
So, is it like you're doing projective geometry in C^n_1 - C^n_2 worth of objects, arranged in C^(n_1 - n_2) - {0} worth of components?
Something still niggling me is what the result of a 2-group acting on a category should be, or more generally what an n-group acting on a (n - 1)-category should be.
In the case n = 1, i.e., a group acting on a set, we either get a set S/G, or better a category S//G. Now for the case n = 2, you construct for us a category, by adding arrows and enforcing some
equations. But why should we not expect a 2-category here? Presumably the answer to my question for general n according to your construction is "an (n-1)-category", but then why do we prefer the
weak quotient in the case n = 1?
As long as nobody stops me I'll keep throwing in comments here, ok?
David wrote:
"Urs' generalization sounds very interesting, but limiting ourselves to John's 2-vector spaces, for concreteness sake"
Sure. I would just like to keep in mind at which point we _have_to_ invoke special assumptions. One of my points was that the mere definition of a projective 2-vector space does not need the
assumption that we have specialized to C = Mod_Disc(K).
David wrote:
"Then if n_1 <= n_2, 0 will be connected to every object"
Let the Baez-Crans 2-vector space be given by the 2-term complex of vector spaces
V_1 --d--> V_0 .
Then the zero vector object 0 in V_0 is connected to every other object in V_0 if and only if d is onto.
So a necessary condition for the isomorphism class of 0 to be V_0 is that dim(V_1) >= dim(V_0). But this is not sufficient. The kernel of d might be all of V_1, for instance, in which case our
2-vector space is skeletal.
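In symbols, the point amounts to this (same 2-term complex notation as above; the bracket ⟨0⟩ for the connected component of 0 is a label chosen here):

```latex
% In the 2-vector space  d : V_1 -> V_0,  a morphism out of 0 lands on d(f) for some f in V_1,
% so the connected component of the zero object is exactly the image of d:
\langle 0 \rangle = \mathrm{im}(d) \subseteq V_0
% This is all of V_0 precisely when d is onto; dim V_1 >= dim V_0 is necessary for that, not sufficient.
```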
David wrote:
"So, is it like you're doing projective geometry in C^n_1 - C^n_2 worth of objects, arranged in C^(n_1 - n_2) - {0} worth of components?"
I guess it is like passing to the skeleton. We pass from V_0 to coker(d).
But I might be wrong. I realize I need to think about what precisely we want to mean by writing V-{0}.
David wrote:
"Something still niggling me is what the result of a 2-group acting on a category should be, or more generally what an n-group acting on a (n - 1)-category should be."
A couple of entries before I argued that the _systematic_ way to derive this is by categorifying the concept of action groupoids by more or less straightforward internalization.
What's wrong with that?
I am pretty interested in the answer to that, albeit possibly for different reason than you are. I know what it means for a 2-bundle with connection to be equivariant under the action of an
ordinary group. Strangely enough, it is not obvious at all (to me at least) what it would mean for it to be equivariant under the action of an honest 2-group. Apart from the problem of figuring
this out by itself, I have no real clue in which applications a 2-equivariant 2-gerbe would arise naturally.
Sorry, I keep going off that gerbe tangent, while you want to have a discussion on Klein's 2-program.
Maybe I should make more explicit how I think that action 2-groupoid should look like, and how that reproduces what John wrote above (as far as I can see).
I originally pointed out that the ordinary action groupoid of G acting on C has space of objects equal to C and space of morphisms equal to GxC, with the obvious source, target and composition maps.
This straightforwardly internalizes in Cat, where now G is a 2-group, and C is some category with a G-action on it.
The result is a _double_ groupoid. And it really is a double groupoid non-equivalent to a 2-groupoid (in general) simply because horizontal and vertical morphisms will live in different categories.
If we assume everything in sight to be strict, we can easily draw the 2-cells of this action double-groupoid.
The category of vertical morphism is C.
The category of horizontal morphisms is C_0//G_0, i.e. the ordinary action groupoid at the level of objects.
If f : c1 --> c2 is a morphism in C and h : g1 --> g2 is a morphism in G, then a 2-cell of our action double groupoid looks like
c1 --g1--> g1 . c1
 |              |
 f              h . f
 v              v
c2 --g2--> g2 . c2
Note that this is not supposed to be a commuting square (which would not make sense), but is a 2-cell which we should address as h.f (h acts on f).
Vertical composition in this double groupoid is just ordinary joint composition in C and G. Horizontal composition is given by the product in G.
If both C and G are discrete categories, this does indeed reproduce the ordinary action groupoid of G on C.
The quotient we are after
should be the 1-category obtained by quotienting out _horizontal_ 2-isomorphisms.
Now, what does that tell us about quotient 2-groups?
As I said in the comment section here (I cannot link to a specific comment here, can I?) we find the quotient of a group H by a subgroup G by forming the action groupoid of G acting on H and
checking if this groupoid has the structure of a 2-group. If so, we form this 2-group and find that it is equivalent to a discrete 2-group Disc(H/G), which identifies the quotient group H/G that
we are looking for.
I am guessing that the same procedure should apply here. Let C be a 2-group being acted on by the 2-group G. We form the action double groupoid as I indicated above.
Under special conditions (which we would address as saying that G is a _normal_ sub-2-group of C) this double groupoid should have the structure of a 3-group!
In other words, there should then be a way to take two of the 2-cells that I have drawn last time and put them on top of each other (in the third dimension) to produce a new 2-cell, such that
this operation is double functorial and has inverses.
If this is the case, we should find a 2-group such that the resulting 3-group is equivalent to that 2-group, regarded as a 3-group with only identity 3-morphisms. This 2-group is the quotient
group we are after.
While conceptually straightforward, I'd need pencil and paper to work out a nontrivial example, if any. Right now I have none with me.
However, the trivial examples, where both G and C are discrete, are easily seen to reproduce the expected result.
I wrote:
"We pass from V_0 to coker(d)."
Sorry, nonsense. We just want to remove im(d) and pass from V_0 to V_0-im(d).
John wrote:
I think I'll pause here so readers don't go insane with boredom.
David wrote:
Is this aimed at the youth with their reduced attention spans?
Well, I don't think people read blogs expecting a single entry to be a long disquisition, especially on a technical matter like Klein 2-geometry. Part of the problem is this damned "skinny
column" format, which makes everything twice as long as it would otherwise be.
John wrote:
Up to equivalence, 2-vector spaces are classified by two natural numbers - the "first and second Betti numbers".
David wrote:
So, the first measures the dimension of the space of the objects, and the second the dimension of the space of morphisms with source 0, as in the 2-Term picture?
Not quite - I'm really talking about Betti numbers: dimensions of the homology groups of a chain complex.
Let me explain.
We've already seen how a 2-group can be written as a crossed module: a group G, a group H, a homomorphism
t: H -> G
together with an action of G on H, satisfying two axioms: equivariance and the Peiffer identity.
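For reference, the two axioms read as follows, writing the action of g on h as g ▷ h (a notation picked here, not used in the thread):

```latex
% Crossed module (t : H -> G, with G acting on H):
t(g \triangleright h) = g\, t(h)\, g^{-1} \quad \text{(equivariance)}
\qquad
t(h) \triangleright h' = h\, h'\, h^{-1} \quad \text{(Peiffer identity)}
```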
Just as a vector space is a special sort of group, a (Baez-Crans) 2-vector space is a special sort of 2-group. Viewed as a crossed module, it's simply one where G and H are vector spaces, and the
action of G on H is trivial.
In this particular case, equivariance and the Peiffer identity are automatic, so we can forget about them.
So, a 2-vector space boils down to a couple of vector spaces G and H and a linear map
t: H -> G
This is otherwise known as an operator, or, if we wish to show off, a 2-term chain complex of vector spaces. The latter terminology would be ridiculous overkill, were it not a special case of a more
general fact: n-vector spaces are secretly just n-term chain complexes of vector spaces!
Now, chain complexes have homology groups, and the dimension of the nth homology group is called the nth Betti number of our chain complex. It's a nice fact that, just as finite-dimensional
vector spaces are classified up to isomorphism by their dimension, chain complexes of finite-dimensional vector spaces are classified up to equivalence by their Betti numbers.
In particular, 2-vector spaces are classified up to equivalence by their Betti numbers. Concretely, for the 2-vector space
d: V_1 -> V_0
the 0th Betti number is
dim (coker d)
while the 1st Betti number is
dim (ker d)
Here the kernel of d, "ker d", consists of the elements of V_1 that get sent to 0 by d, while the cokernel of d, "coker d", is V_0 modulo the image of d.
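In the notation of the 2-term complex above, this gives (the last identity is just rank-nullity, added as a sanity check):

```latex
% Betti numbers of the 2-vector space  d : V_1 -> V_0 :
b_0 = \dim(\operatorname{coker} d) = \dim V_0 - \dim(\operatorname{im} d)
\qquad
b_1 = \dim(\ker d)
% so that  b_0 - b_1 = \dim V_0 - \dim V_1.
```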
For example, look at this 2-vector space:
1: C -> C
where C is the complex numbers. The dimensions of V_0 and V_1 are both 1, but the 0th and 1st Betti numbers are both 0! So, this 2-vector space should be equivalent to the trivial 2-vector space,
1: {0} -> {0}
And, it's easy to see this if we think of 2-vector spaces as categories (just as we do for 2-groups). The category corresponding to
1: C -> C
has C's worth of objects, and a single morphism from any object to any other. So, this category is equivalent to the category whose only object is 0, and whose only morphism is the identity. And
this little category corresponds to
1: {0} -> {0}
In short: adding extra objects, but also extra morphisms saying these objects are isomorphic, gives us a new "puffed-up" version of a 2-vector space which is equivalent to the one we started
with... and its Betti numbers don't change.
David wrote:
Then if n_1 <= n_2, 0 will be connected to every object, so V - {0} is empty.
Well, your guess about the definition of these numbers was a bit wrong, but your intuition is right, so we can fix what you're saying here.
The 0th Betti number of our 2-vector space is the dimension of the space of isomorphism classes of objects. (The 1st Betti number is the dimension of the automorphism group of any object!) So,
if the 0th Betti number is zero, all objects are isomorphic to 0, and what I'm calling
V - {0}
will be empty.
This generalizes the fact that if V is a plain old vector space,
V - {0}
will be empty if its dimension is zero. Vector spaces give special 2-vector spaces, and their "dimension" then becomes the "0th Betti number". Nice, huh?
Moral: the projective 2-space PV is empty if the 0th Betti number of V vanishes.
Hi, Urs! Believe it or not, I'm writing a This Week's Finds about your ideas on the M-theory 3-group, in order to procrastinate from finishing my paper with Aaron before finishing my paper with
you. But, I decided to procrastinate a bit on writing This Week's Finds, so I posted another article on David's blog here - and ran into you!
I'm procrastinating a lot these days, but no matter what I do, it always seems to involve higher categories, so I figure it's not so bad.
You write:
To me, a 2-vector space is
- a monoidal category C, playing the role of the ground ring/ground field
- a module category V over C, playing the role of a vector space over C .
These are great concepts, but it's probably good not to speak of "2-vector spaces" in such generality, since category theorists already have a perfect term for this concept: an action of a
monoidal category on a category. This categorifies the usual concept of an "action" of a monoid on a set.
When our monoidal category is equipped with an addition as well as a multiplication, it's called a "rig category" - or often, a bit incorrectly, a "ring category". These like to act on categories
equipped with their own addition, which we then could call modules of our rig category.
When our rig category acts a bit more like a field, perhaps then its modules deserve to be called 2-vector spaces. But, I don't know anyone who has axiomatized a concept of "field category".
ANYWAY, despite this terminological nitpicking, I really like your idea. At first I didn't believe that the Baez-Crans 2-vector spaces were precisely the same as modules of the rig category Disc
(K) (where K is some field), but now it's looking like you're right. Did you check this carefully?
Hmm, maybe it's obvious that "working over Disc(K)" is the same as "working internal to Vect_K." Some kind of general abstract nonsense seems to be at work here.
The Kapranov-Voevodsky 2-vector spaces are indeed modules of Vect_K, but as you seem to point out, such modules need to be especially nice to qualify as Kapranov-Voevodsky 2-vector spaces: they're
basically the (finitely generated) free modules, of the form
Vect_K^n.
Anyway, I hope you forgive me if I pay less attention to your "big picture" ideas than I should - I really want to keep zooming in on examples of Klein 2-geometries. I want to get to the point
where I clearly see how something like projective geometry fits into a bigger categorified picture. So, at least on this blog, I'm going to sink my teeth into this problem with the tenacity of a
bulldog, and not let go, no matter what tasty morsels you dangle in front of me.
I'm procrastinating a lot these days
And I'm now procrastinating here so as not to write my talk about history of maths for a workshop seeking a new epistemology of maths. And the case I was going to consider is the history you are
delaying writing with Aaron because of your blogging here, in particular whether you are just writing a Royal-road-to-me account, and whether, if this is so, a sentiment of this accounts for this
piece of modesty:
"As we approach the present we discuss the work of less famous scientists, stopping shortly after we mention the authors of the current paper, when it becomes impossible to sink any lower."
But back to some proper procrastinating, might it be that although 2-vector spaces are classified up to equivalence by 2 natural numbers, that it makes a difference to their 2-vector subspaces?
1: C -> C
1: {0} -> {0}
are equivalent, but the former has a nontrivial sub 2-space.
What would the passage from projective to affine geometry with your 2-vector spaces look like? We might try the (3,0) 2-vector space, form {V - {0}}//R*, and quotient the 2-group of projective
transformations by those which preserve (0,0,1).
Urs said:
As long as nobody stops me I'll keep throwing in comments here, ok?
I cannot link to a specific comment here, can I?
No, it's way too primitive here for that.
John Baez wrote:
"Believe it or not, I'm writing a This Week's Finds about your ideas on the M-theory 3-group,"
Cool. It's a topic that people secretly know a lot about, without knowing that they know it - since they don't know the higher category theoretic picture that it fits in. So it certainly is a
topic that can benefit from a TWF.
One big question that I long to figure out the answer to is:
Do we need "curvators" to properly understand (super)gravity as a theory of an n-connection, or does it suffice to enlarge the structure n-algebra by auxiliary fields and let the fake flatness
conditions take care of everything, along the lines that I pointed out here.
I am hoping it is the latter, but I don't know yet.
John wrote:
"[...] before finishing my paper with you [...]"
For a long time I was worried that I would never be able to reformulate the proofs in a way that you would consider "finished". But now I know what we need.
My vacation ends today. I will start typing the new, better way to formulate all this starting tomorrow.
John wrote:
"At first I didn't believe that the Baez-Crans 2-vector spaces were precisely the same as modules of the rig category Disc(K) (where K is some field), but now it's looking like you're right. Did
you check this carefully?"
I first saw this stated in one of Elgueta's papers, unless I dreamed it. I did check it mentally, without ever trying to write things down cleanly. So, no, I did not check it real carefully.
But it should essentially be trivial. Being a Disc(K)-module category makes both objects as well as morphisms K-modules, hence K-vector spaces. Composition must respect the Disc(K)-action by
functoriality, hence must be K-linear. Similar arguments make source and target maps K-linear. I think.
John wrote:
"Anyway, I hope you forgive me if I pay less attention to your "big picture" ideas than I should - "
Sure. You two go ahead and think about 2-Klein. I will now and then check which of your constructions require assuming restriction to Mod_Disk(K), and which don't.
In the end, somebody should sit down and try to see how the big theorems of linear algebra, i.e. the spectral theorem mostly, lift to Mod_C. I made some comments on that here. There are several
indications that this is very important for 2D QFT.
John wrote:
"such modules need to be especially nice to qualify as Kapranov-Voevodky 2-vector spaces:"
Yes, they need to be categories of K^n-modules, instead of modules for arbitrary algebras, i.e. they live in BiMod_K^n.
I think what we should really be looking at is all of BiMod (regarded as a sub-2-category of Mod_Vect they way I indicated).
Why? For instance because every strict 2-group H-->G together with any rep of H has a canonical faithful 2-rep (->) in BiMod induced by that rep of H, which, for H-->G the string 2-group,
essentially reproduces the Stolz-Teichner rep.
Similarly, the corresponding 2-rep of shifted U(1) coming from the canonical 1D rep of U(1) is the right one to find abelian gerbes as associated line 2-bundles. I am in the process of writing
that out.
Sorry, I know this thread here is about Klein's 2-program. :-)
Please pardon the somewhat "disciplinary" tone of the following entry. I'm afraid David is being a bit... well, evil.
But back to some proper procrastinating, might it be that although 2-vector spaces are classified up to equivalence by 2 natural numbers, that it makes a difference to their 2-vector subspaces?
1: C -> C
1: {0} -> {0}
are equivalent, but the former has a nontrivial sub 2-space.
Right. But, we've run into the same puzzle before, back in July, when we noticed that the 2-group TRIV(G) was equivalent to the trivial 2-group but had nontrivial sub-2-groups. I wrote:
The really interesting puzzle is how a boring 2-group can seem interesting: for example, how a weakly trivial 2-group can have nontrivial sub-2-groups.
I thought I answered this puzzle - but now it looks like I never got around to it! Whoops!
The answer is that the naive concept of "subcategory" is evil. The same holds for "sub-2-group", "sub-2-vector space" and so on.
What do I mean by the naive concept of subcategory? I mean the one where we say C is a subcategory of D if there is a functor
C -> D
that's one-to-one on objects and on morphisms.
This is clearly evil, because "one-to-one on objects" is defined in terms of equations between objects, and equations between objects are always evil. You should always use specified isomorphisms
But what do I mean by evil? This is a technical term here: it means "not invariant under equivalence of categories".
The point is that we want all true statements to remain true whenever we replace all the categories involved by equivalent categories. Life runs smoothly when this holds. It holds automatically
if all our concepts are invariant under equivalence - that is, non-evil. But if we accidentally introduce an "evil" concept into our repertoire, we have to be incredibly careful. If we're not,
various puzzles and "paradoxes" will emerge, in which equivalent categories start acting differently!
And, that's just what keeps happening when you (and perhaps even I) fling around words like "subcategory", "sub-2-group" and "sub-2-vector spaces", defined in a naive way.
The naive concept of "subcategory" is manifestly evil, since if C is a naive subcategory of D:
C -> D
and we have an equivalence
D -> D'
the composite
C -> D'
does not make C into a naive subcategory of D'.
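A tiny example of the failure being described, spelled out (this example is added here for illustration; it is not John's):

```latex
% Let C be the discrete category on {a, b} (identity morphisms only), D the category on
% {a, b} with a unique isomorphism between any two objects, and D' the terminal category {*}.
% The inclusion C -> D is one-to-one on objects, so C is a "naive subcategory" of D.
% The unique functor D -> D' is an equivalence, but the composite
C \longrightarrow D \xrightarrow{\;\simeq\;} D'
% sends both a and b to *, hence is not one-to-one on objects: naive subcategory-hood
% is destroyed by passing to an equivalent category.
```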
So: instead of using naive, evil notions, we should use more sophisticated good ones.
What are some good notions that we can use to talk about a functor
C -> D?
I can think of three. How can we use them to say that C is a subcategory of D, but in a less naive way?
A separate problem, by the way, is that you seem to be counting 2-vector subspaces of a 2-vector space and taking that number seriously. Counting in this naive way is good for sets, but evil for
categories... and surely there should be something like a category of 2-vector subspaces of a given 2-vector space. I'm not sure how to define it - but surely that's what we want for Klein 2-geometry!
We can think of ordinary subgroups as kernels of group homomorphisms.
Maybe we should think of sub-2-groups as suitably defined kernels of morphisms of 2-groups?
We can think of ordinary subgroups as kernels of group homomorphisms.
Normal subgroups, you mean.
I'm afraid David is being a bit... well, evil.
Thanks for pointing out my sins. I'll go flagellate myself with a category of nine isomorphism classes of tail.
What are some good notions that we can use to talk about a functor
C -> D?
I can think of three.
Snaffling a piece of your presence elsewhere in the blogosphere, here's one:
Makkai’s concept of an “anafunctor” F: C -> D, which assigns to each object in C not a specific object of D but only the universal property of an object in D.
I should think those Chicago notes contain more, perhaps in the appendix.
As for what about a functor wants to make you say it's a 'subcategory', there must be some construction from 2-category theory which is equivalent to 'monic' in 1-category theory. Something along
the lines of F: C-->D is a subcategory if for any two functors G,H: B-->C with FoG naturally equivalent to FoH, then G is nat. equiv to H.
If 2-vector spaces are classified up to equivalence by their Betti numbers b_0 and b_1, why not choose as representatives:
d:C^b_1 --> C^b_0, with d the zero map.
Might then 2-vector spaces of the form:
d: C^m --> C^n, with d again the zero map, and m <= b_1, n <= b_0, be representatives of bona fide 2-vector subspaces.
(Why is it 2-vector space rather than vector 2-space?)
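Spelling out David's suggestion in symbols (a restatement only, with the zero differential written explicitly):

```latex
% Skeletal representatives of 2-vector spaces with Betti numbers (b_0, b_1),
% and the evident candidate sub-2-vector spaces:
0 : \mathbb{C}^{b_1} \longrightarrow \mathbb{C}^{b_0}
\qquad
0 : \mathbb{C}^{m} \longrightarrow \mathbb{C}^{n} \quad (m \le b_1,\ n \le b_0)
```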
David said:
"Normal subgroups, you mean."
Right. Thanks for correcting that.
and surely there should be something like a category of 2-vector subspaces of a given 2-vector space. I'm not sure how to define it - but surely that's what we want for Klein 2-geometry!
Doesn't section 3 of HDA6, where you talk about 2Vect as a 2-category equivalent to 2Term, contain the answers? Just look for all '2-monic' 2-morphisms into your 2-vector space.
John wrote:
I'm afraid David is being a bit... well, evil.
David wrote:
Thanks for pointing out my sins. I'll go flagellate myself with a category of nine isomorphism classes of tail.
It's probably better if you just quit being evil. For good. In real life this is hard, but in category theory it doesn't take being a saint: you just need to build a little mechanism into your
conscience that automatically avoids "equations between objects".
This mechanism will make little warning bells chime before you speak of a "subcategory" in the naive sense as one whose inclusion functor
C -> D
is (among other things) one-to-one on objects. Since this concept involves equality between objects, it's almost sure to be evil.
Of course, you also need to have a built-in mechanism that takes the concept "sub-thingie" and automatically translates it into "monomorphism between thingies". No naive "subsets" here, please!
Such concepts should be expressed in terms of arrows.
So, when you learn to shun your evil ways, before you say "subcategory" your conscience will cry "Wait! I mean something like monomorphism between categories!" And then it'll cry "Wait! How can I
formulate this notion without mentioning equations between objects?"
Okay, that concludes today's sermon.
Next, I was trying to nudge you towards some answers to these questions of conscience....
What are some good notions that we can use to talk about a functor
C -> D?
I can think of three.
David replied:
Snaffling a piece of your presence elsewhere in the blogosphere, here's one:
Makkai’s concept of an “anafunctor” F: C -> D, which assigns to each object in C not a specific object of D but only the universal property of an object in D.
Anafunctors? We're talking about non-evil properties of functors right now, not replacements for functors. The three properties I was hinting at are well known to you:
full, faithful, and essentially surjective.
These are all fundamentally "surjectivity" properties - I think we've talked about that before. But, "faithfulness" acts like an "injectivity" property, because while it really means (something
like) "surjective on equations between morphisms", this turns out to mean (something like) "injective on morphisms".
And that's good, because right now we're looking for injectivity properties, since we're trying to understand monomorphisms between categories in a non-evil way.
So, we know how to say
C -> D
is "injective on morphisms" in a good way - we say it's faithful.
The question is, how to say it's "injective on objects" in a good way.
In other words, what should "essentially injective" mean???
I should think those Chicago notes contain more, perhaps in the appendix.
My gosh, you're right! I think you're talking about Section 5.5, Epimorphisms and monomorphisms, starting on page 47 of the current draft.
However, this stuff gets a bit involved, and I'm not sure it even answers the key question at hand, namely:
What's a non-evil way to say a functor is something like "one-to-one on objects"?
As for what about a functor wants to make you say it's a 'subcategory', there must be some construction from 2-category theory which is equivalent to 'monic' in 1-category theory. Something along
the lines of F: C-->D is a subcategory if for any two functors G,H: B-->C with FoG naturally isomorphic to FoH, then G is nat. isomorphic to H.
Nice! Yes, that's a non-evil concept, and it's a bit like some things Mike talks about in Section 5.5 - but not quite the same. It might be just what we want, but I can imagine a very direct
lowbrow non-evil way to express the idea of "one-to-one on objects", too.
John Baez wrote:
The question is, how to say it's "injective on objects" in a good way.
In other words, what should "essentially injective" mean???
I'm not sure if that works, since if we have a skeleton skel(C) of a category C, the desired `subcategory' A \to C needs to be a subcategory of skel(C), as a category is equivalent to any of its skeletons.
So we send each object in C to a chosen representative of its isomorphism class to get skel(C), and the composite A \to C \to skel(C) is certainly not injective on objects.
Street's old paper ``Two-dimensional sheaf theory''
has as one of its aims a 2-categorical version of a regular category (strict, mind you) and he defines an arrow (e.g. a functor) in a 2-category to be chronic when it is fully faithful and
injective on objects. In the bicategorical version of this, he drops the injective on objects. He also discusses so-called acute arrows which act like regular epimorphisms in a category (e.g. G \
to G/H for a normal subgroup H). One could then possibly define normal 2-subgroup to be the kernel of such a morphism in the bi-/2-category of 2-groups. I think Vitale and others have considered
what the kernel of a map between 2-groups is. Actually, now that I think about it, Vitale did a bit more that that in ``A Picard-Brauer exact sequence of categorical groups''.
Can we get categorical versions of the isomorphism theorems, with isomorphism replaced by equivalence? This is a very relevant point for the Klein part of this discussion (I apologise like the
rest for diversions) - how `transitive' are subfigures? How do we compare G//H to
for K normal in H normal in G? Or has this been done while I wasn't watching?
Back to an old topic, I always wondered what an equivalence relation on a category was, given that an equivalence relation on a set is a groupoid. Here of course we are interested in the 2-equivalence
relation: ``Is in the same orbit of the given 2-group''. This should be much easier to figure out than the general case.
Here's an example close to my heart: Take the one point topological space pt with a G-action (G a compact Lie group for a comfortable example). (Homotopy) quotient it. The map q: pt \to pt//G is
onto. If G is abelian, we can take pt of course to be the trivial group and we can get a group structure on pt//G (in fact a 2-group structure!). What is the kernel of q?
Also, does a normal 2-subgroup have to be invariant under the ``conjugate'' action, or only up to equivalence?
One other point: when we get to Baez-Crans 2-vector spaces, what happened to the Postnikov information (=associator) one gets when classifying 2-groups?
I can imagine a very direct lowbrow non-evil way to express the idea of "one-to-one on objects", too.
Something like F: C-->D is one-to-one on objects whenever, for any pair of objects X,Y in C, if F(X) and F(Y) are isomorphic in D, then X and Y are isomorphic in C.
Might that be called 'full on isomorphisms'? Pause for quick Google search. Aha!
"...we call a functor U :A -->C pseudomonic if it is faithful and if, moreover,it is full on isomorphisms: the latter means that any invertible h : UA --> UA' in C is Ug for some (necessarily
unique) g : A --> A' in A, which by an easy argument must itself be invertible." p. 230 of this.
dm roberts said:
I'm not sure if that works, since if we have a skeleton skel(C) of a category C, the desired `subcategory' A \to C needs to be a subcategory of skel(C), as a category is equivalent to any of its skeletons.
So we send each object in C to a chosen representative of its isomorphism class to get skel(C), and the composite A \to C \to skel(C) is certainly not injective on objects.
Isn't this precisely why John said essentially injective on objects? The pseudomonic construction seems to deal with your example.
Does it make life easier working with skel(2Vect), along the lines of:
d:C^b_1 --> C^b_0, with d the zero map, Betti numbers b_0 and b_1.
We also need to make sure that the image of our monic functor is indeed a category. The concept of essential injectivity must be strong enough to ensure that.
For, in general, the image of a functor is not a category, because the image of morphisms may become composable while the pre-images were not.
I don't know if this is terribly relevant to the present discussion. But we were concerned with this issue when I visited Zoran Skoda and Igor Bakovic in Zagreb a while back, where we tried to
understand coequalizers in Cat in order to understand associated 2-bundles.
I am aware of this paper which deals with the issue by passing, essentially, to the smallest subcategory which contains the image of a given functor.
We also need to make sure that the image of our monic functor is indeed a category. The concept of essential injectivity must be strong enough to ensure that.
For, in general, the image of a functor is not a category, because the image of morphisms may become composable while the pre-images were not.
Doesn't the faithfulness condition handle this? If the images compose, then there is an arrow, not necessarily the composition, whose image is the composition.
What are the pseudomonics into
d:C^b_1 --> C^b_0, with d the zero map ?
As all maps are isos, we need a full and faithful functor, which must come from 2-vector spaces with 0th Betti number less than or equal to b_0, and 1st Betti number equal to b_1. If correct,
isn't this wrong:
Also nice is that whenever we have a nontrivial "2-vector subspace" W in V, we get an inclusion
PW -> PV
These give "figures" like projective points, lines, etc. in our projective 2-space. But now the types of figures are indexed not just by dimension, but by two Betti numbers!
I'm intrigued why you were called "evil"! Was it something you said? Wrote? Thought?
Don't worry, this is a blog carefully restricted to pleasant people. If you search for where John Baez says,
I'm afraid David is being a bit... well, evil,
you'll see he meant it in a technical sense:
But what do I mean by evil? This is a technical term here: it means "not invariant under equivalence of categories".
See, David? I knew that if you let me call you "evil", your blog would get some new readers. Blogs thrive on conflict; we're too nice to actually fight, so we have to invent technical terms that
make it look like we're fighting.
If you search for where John Baez says,
"I'm afraid David is being a bit... well, evil"
you'll see he meant it in a technical sense:
"But what do I mean by evil? This is a technical term here: it means "not invariant under equivalence of categories"."
Yes, I'm afraid David is not invariant under equivalence of categories.
Seriously, the fascinating thing about categorification is that by deliberately excluding certain syntactically well-formed concepts (those involving equations between objects), one obtains a
more interesting theory! At present, we mainly accomplish this by moral suasion - arguing that certain concepts are "good" and others "evil". I'm highlighting this in a jokey way by making "evil"
into a precise technical term.
But, it may not be satisfactory in the long run to exclude certain concepts by moral pressure. The logician Michael Makkai has grabbed the bull by the horns and made infinity-categories into a
new "foundation for mathematics" in which there are no equations! This system is called FOLDS, or first-order logic with dependent sorts, and David and I saw Makkai explain it at the IMA
conference on n-categories a couple summers ago. In this system you just can't say anything evil. It may never catch on, but it's a truly bold conception.
John wrote:
I can imagine a very direct lowbrow non-evil way to express the idea of "one-to-one on objects", too.
David wrote:
Something like F: C-->D is one-to-one on objects whenever, for any pair of objects X,Y in C, if F(X) and F(Y) are isomorphic in D, then X and Y are isomorphic in C.
Right! That's one straightforward way to weaken the definition of "one-to-one on objects". Jim calls such a functor essentially injective.
David wrote:
Might that be called 'full on isomorphisms'? Pause for quick Google search. Aha!
Oh, interesting. But notice,
"full on isomorphisms" does not mean the same thing as "essentially injective" - it's a stronger property:
"...we call a functor U :A -->C pseudomonic if it is faithful and if, moreover,it is full on isomorphisms: the latter means that any invertible h : UA --> UA' in C is Ug for some (necessarily
unique) g : A --> A' in A, which by an easy argument must itself be invertible.
You see, he's not just saying "if UA and UA' are isomorphic, then A and A' are isomorphic". He's saying every isomorphism between UA and UA' is the image of an isomorphism between A and A'!
I don't feel very knowledgeable about this stuff. So, it's quite possible that "full on isomorphisms" is the notion we really want, not "essentially injective".
In general, you can do a lot more if you talk about specific isomorphisms rather than "isomorphicness". It's really helpful to get rid of the existential quantifier built into the latter concept.
This suggests that "full on isomorphisms" is more useful than "essential injectivity".
So does the fact that someone already decided to define pseudomonic to mean
faithful and full on isomorphisms!
So, I need to do some reading and thinking, but we might wind up using pseudomonics instead of naive, evil "subcategories".
Urs wrote:
We also need to make sure that the image of our monic functor is indeed a category.
The naive concept of image of a functor
F: C -> D
is evil, since it contains those objects that are equal to objects of the form F(c). An object isomorphic to one in the image might not be in the image.
There are various non-evil substitutes for the notion of image; Toby uses the "full image" in his notes on properties, structure and stuff, and this is a category - but it's not what we really
want here.
I need to think about this a little bit....
Anyway, we're making progress!
Urs wrote:
... in general, the image of a functor is not a category, because the image of morphisms may become composable while the pre-images were not.
Right. I claim that annoying behavior like this is to be expected, since "image" is an evil concept. But in some cases, the image will be a category.
Doesn't the faithfulness condition handle this? If the images compose, then there is an arrow, not necessarily the composition, whose image is the composition.
It sounds to me like you're using fullness here, not faithfulness. The image of a full functor is a subcategory for the reason you said. Remember, a functor is full if any morphism between
objects in the image is in the image.
I think we can see that the image of a pseudomonic doesn't suffer from the problem Urs mentions. Suppose
F: C -> D
is pseudomonic: that is, faithful and full on isomorphisms. Suppose
f: d -> d'
f': d' -> d''
are morphisms in the image of F. Why is their composite in the image of F?
We have
f = F(g)
f' = F(g')
but as Urs points out, there's a problem! g and g' might not be composable:
g: c -> c'
g': c'' -> c'''
but we may not have c' = c''. We only know
F(c') = d' = F(c'')
But now we use the fact that F is full on isomorphisms! There's an isomorphism
1: F(c') -> F(c'')
so this must be the image of some morphism
h: c' -> c''
This h bridges the annoying gap between c' and c''. Now look at
ghg': c -> c'''
where I'm composing morphisms left-to-right. We have
F(ghg') = F(g)F(h)F(g') = f1f' = ff'
So, ff' is indeed in the image of F! Hurrah! QED!
Now let's do a little postmortem on this proof.
First, we only needed the fact that F was full on isomorphisms. Faithfulness was irrelevant.
Second, while "full on isomorphisms" gets the job done, I don't think the weaker "essential injectivity" would work. We need to take a specific isomorphism in D:
1: F(c') -> F(c'')
and find a morphism in C that maps to it. It would not suffice to say "since F(c') and F(c'') are isomorphic, c' and c'' must be isomorphic."
This is the kind of thing I meant when I said it's more powerful to work with specific isomorphisms than "isomorphicness", i.e. mere existence of isomorphisms. I hadn't known it would be
important here, but it's a lesson one learns over and over - and we're learning it again here.
So, for better or worse:
Theorem: The image of a functor
F: C -> D
is a category if F is full on isomorphisms.
I'm sure this is already known by those wiser than I.
Note this theorem is stated in an evil manner, since it uses the concept of "image".
However, the concept of "image" is not actually evil when F is full on isomorphisms, since any object isomorphic to one in the image is again in the image!
So, pseudomonics seem like the right substitute for naively defined "subcategories".
Or, there could be more than one good substitute. I'm actually inclined to believe this, due to some reflections on properties, structure, stuff and eka-stuff. But, pseudomonics at least seem
like one good substitute.
D. M. Roberts wrote:
Can we get categorical versions of the isomorphism theorems, with isomorphism replaced by equivalence?
How do we compare G//H to
(G//K)//(H//K) for K normal in H normal in G? Or has this been done while I wasn't watching?
No, we haven't gotten this far yet. We're still struggling to understand the notion of "sub-2-group"! That's how we got into studying various notions of "subcategory": first an evil notion
proposed by David, and now an improved one, namely "pseudomonic functor".
So, now we can define a sub-2-group of a 2-group G to be a 2-group H equipped with a pseudomonic homomorphism
i: H -> G
In case you're wondering, a 2-group homomorphism was defined in HDA5 to be just a weak monoidal functor between 2-groups.
As Urs and you pointed out, we could have short-circuited some of this work if we'd only wanted to understand "normal" sub-2-groups, since we could define them as kernels of homomorphisms. But,
since we're doing Klein 2-geometry, we really need to understand quotients
G/H
where H is an arbitrary sub-2-group of G. These are the "2-spaces of geometrical figures" in Klein 2-geometry.
So, we're getting close to tackling questions like the one you mentioned, but we're not there yet.
One other point: when we get to Baez-Crans 2-vector spaces, what happened to the Postnikov information (=associator) one gets when classifying 2-groups?
Ah, good - a question I can answer. Short answer: it's trivial!
Long answer: 2-groups are secretly the same as connected pointed spaces with only pi_1 and pi_2 nontrivial - all higher homotopy groups vanishing. So, they're classified up to equivalence by:
pi_1: the group of isomorphism classes of objects.
pi_2: the group of automorphisms of the identity object.
an action of pi_1 on pi_2 by "conjugation".
an element of the 3rd cohomology group of pi_1 with coefficients in pi_2, coming from the associator.
(This is explained in HDA5 and, in a more hand-waving way, in my notes on n-categories and cohomology, in the section called "A Low-Dimensional Example".)
The last two bits of information - the action and the 3rd cohomology class - are called Postnikov data. What happens if we make them trivial - make the action trivial and the cohomology class
vanish? Suppose just for good measure that we also make pi_1 abelian! What do we get?
Then we get a specially simple sort of 2-group, which is secretly just a 2-term chain complex of abelian groups!
2-vector spaces are the special case of this where our abelian groups are vector spaces.
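To make this concrete in the simplest case - just a sketch, in the numbering above - write the 2-term chain complex as d: C_1 -> C_0, with an object for each element of C_0 and a morphism from x to x + d(v) for each v in C_1. Then
$$\pi_1 \;=\; \mathrm{coker}(d) \;=\; C_0/\mathrm{im}(d), \qquad \pi_2 \;=\; \ker(d),$$
with trivial action and vanishing 3rd cohomology class. In particular, for the 2-vector spaces we've been discussing, where d is the zero map, these are just C^{b_0} and C^{b_1} - which is where the two Betti numbers come from.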
All this generalizes massively! Chain complexes of abelian groups are secretly just connected pointed spaces with pi_1 abelian and all Postnikov invariants trivial. Such spaces are just products
of Eilenberg-Mac Lane spaces:
K(A,1) x K(A',2) x K(A'',3) x ...
This is how homological algebra sits inside homotopy theory!
I could talk a lot more about this, but I'll resist.
David wrote:
What is the proper definition of an abelian/soluble/nilpotent/simple 2-group?
I've been meaning to say something about this for a long time. I already said that 2-groups come in 3 amounts of abelianness: 2-groups, braided 2-groups and symmetric 2-groups. In general,
n-groups come in n+1 different flavors, increasingly abelian as we march down the periodic table.
But what about "simple" n-groups?
There really are no simple n-groups except for simple 1-groups - that is, ordinary simple groups. By definition, a "simple" gadget has no nontrivial quotients. We can always form a quotient of an
n-group which is an (n-1)-group, by decategorifying it. This quotient is nontrivial except when n = 1.
This leads to the theory of Postnikov towers, explained in my lecture notes. Using this theory, we can describe general n-groups as built up from simple groups, "glued together" using group cohomology.
The most familiar case is how we build up arbitrary groups from simple groups via iterated extensions. Extensions are classified using (nonabelian) 2nd cohomology. To build up n-groups, we need
higher cohomology classes, up to the (n+1)st cohomology. For 2-groups, a 3rd cohomology class describes the associator - I described this in my last post.
So, the classification of finite simple groups is the first step on the road to classifying all finite n-groups. This road may well be too long and twisty for humans to ever see the end of it.
Even classifying all finite groups is beyond us! But, we can still get lots of useful information by trying to understand how general n-groups are built up out of simple groups.
Wow, everyone's been busy overnight (for me ;)
So pseudomonic ensures that when we pass to isomorphism classes we get everything, without assuming the non-isomorphisms have a preimage.
In the paper of Vitale I mentioned earlier (since now I've had another look) he gets up to exact sequences and all the stuff one does in (semi)abelian categories but now for 2-groups.
There is a not too difficult characterisation of 2-group homomorphisms (full, faithful, eso etc) in terms of what the homotopy groups and the induced maps do.
For instance, F:G \to H is
1)eso iff pi_0(F) is surjective,
2)faithful iff pi_1(F) is injective,
3)full iff pi_0(F) is injective and pi_1(F) is surjective,
4)an equivalence iff pi_1(F) and pi_0(F) are isomorphisms
I'm thinking of G,H as monoidal cats here, not one-object 2-cats.
Since we classify 2-groups up to these homotopy groups (with some Postnikov data), how do we say `pseudomonic' in this language, or have we forgotten too much information? Well, since 2-groups are
groupoids, all morphisms are isos, so pseudomonic is an overly large tool for the job.
In this system you just can't say anything evil. It may never catch on, but it's a truly bold conception.
Once we get it to work in math, all we'll need is to find a similar language for the rest of life. Trouble is, the relation between saying and doing is somewhat different there.
After my last posts, and the realisation (while watching a film) that I'd made all kinds of mistakes, as John pointed out, I went on happily to think about the automorphism 2-group of a
2-vector space (the film being a bit dull).
Now, I return and pseudomonics are flavour of the month. So how about the problem I raised:
What are the pseudomonics into
d:C^b_1 --> C^b_0, with d the zero map
As all maps are isos, we need a full and faithful functor, which must come from 2-vector spaces with 0th Betti number less than or equal to b_0, and 1st Betti number equal to b_1?
In general, doesn't this make the range of sub-2-groups of a 2-group rather boring? For instance, that example from way back about symmetries of two triangles, we were looking at a 2-group with
72 objects and 72 x 36 morphisms, and thought we were considering a sub-2-group with just identity morphisms. According to the pseudomonic story it's not a sub-2-group. It wasn't even essentially
injective on objects.
David, John - Aha. I see now. It was just a trick to get more people sucked into the categories :-)
we could have short-circuited some of this work if we'd only wanted to understand "normal" sub-2-groups, since we could define them as kernels of homomorphisms.
Do we know that the inclusion of a kernel of a 2-group homomorphism is pseudomonic (or even injective on objects)? Is the definition of kernel fixed? Is there a notion of 'essential kernel'?
The really interesting puzzle is how a boring 2-group can seem interesting: for example, how a weakly trivial 2-group can have nontrivial sub-2-groups.
I thought I answered this puzzle - but now it looks like I never got around to it! Whoops!
Now you answer it by saying that evil has intruded. But in the original context of the question back in July, you pointed out:
When people tried to categorify the theory of K-bundles, without quite knowing at first what they were doing, they basically invented the theory of AUT(K)-2-bundles - but under a different guise:
they called it the theory of "nonabelian K-gerbes".
and that for some K, AUT(K) is equivalent to K<-K, and so trivial. How can a trivial 2-group be important?
I knew one needed fortitude to pursue mathematical research, but now I'm feeling that need. 4 months in and we're still wondering what a sub-2-group is. Is there any glimmer of gold ahead, or
will it be iron pyrites for the foreseeable future?
David wrote:
I knew one needed fortitude to pursue mathematical research, but now I'm feeling that need. 4 months in and we're still wondering what a sub-2-group is. Is there any glimmer of gold ahead, or
will it be iron pyrites for the foreseeable future?
Welcome to mathematics! Remember how I wondered in July if we'd have the energy to stick with this stuff?
Well, I know I do have the energy - I'm really happy about how things are going now. If you get tired, don't feel bad. You can quit if you like, now: you helped me over the hump, and that's all I
could ask.
Now we know what projective 2-geometry is! Even better, we've quit fiddling around and started doing some serious work - like figuring out what a sub-2-group really is. It's subtler than one
would have guessed! Before, we were just sort of crossing our fingers and hoping everything would work more or less like the decategorified case. That's good for starters, but now we've moved
beyond that stage. Lemmas are starting to accumulate; we're sifting through candidate definitions - "essentially injective" versus "full on isomorphisms", that kind of stuff. In short: it's not
just dreaming anymore; it's becoming a real subject, with ties to homological algebra (chain complexes), homotopy theory (Postnikov towers), algebraic geometry (those projective 2-spaces) and
more. In another year or two, something impressive might actually happen.
This is how it always goes. Perhaps you're thinking ugh, it's getting to be a technical mess when I'm thinking hey, there are some really interesting questions here to straighten out - it's not all a mess!
It's really late now; tomorrow I'll tell you some stuff about sub-2-groups.
Well it wasn't quite up there with Henry V's speech on the eve of Agincourt, but it'll do. Anyway, I wouldn't miss this trip for the world.
Well it wasn't quite up there with Henry V's speech on the eve of Agincourt, but it'll do.
Well, the troops being rallied aren't about to get riddled with arrows, either.
Or maybe they are....
I'm no longer sure "pseudomonics" are the right substitute for "monics" when we go from groups to 2-groups. I spent several hours at that Starbucks, cranking out math in a caffeinated frenzy, so
I'll try to briefly summarize what I found. But first, I want to emphasize some excellent points D. M. Roberts made.
For starters, if we think of a 2-group as a category, every morphism in it is an isomorphism. So, for maps between 2-groups, "full on isomorphisms" just means "full", and "pseudomonic" means
"full and faithful". Fewer nuances to worry about - good!
Second, when we're trying to understand maps between 2-groups, we should focus on data that's invariant under equivalence. As I mentioned a while back, a 2-group is known up to equivalence if we
know its pi_1, its pi_2, the action of pi_1 on pi_2, and a 3rd cohomology class. Similarly, a map between 2-groups
F: G -> H
is known up to equivalence if we know the induced maps
pi_1(F): pi_1(G) -> pi_1(H)
pi_2(F): pi_2(G) -> pi_2(H)
So, it should not at all be surprising that interesting properties of F can be phrased in this language. D. M. Roberts gave us a handy dictionary. Let me flesh it out a bit, and translate it into
my favored numbering scheme:
F is essentially surjective iff pi_1(F) is surjective.
F is essentially injective iff pi_1(F) is injective.
F is full on automorphisms iff pi_2(F) is surjective.
F is faithful iff pi_2(F) is injective.
And some secondary notions:
F is full iff pi_1(F) is injective and pi_2(F) is surjective.
F is an equivalence iff pi_1(F) and pi_2(F) are isomorphisms
The only one of these that seemed tricky to me was "full". Remember, F is full if given a morphism
f: F(x) -> F(y)
we have
f = F(g)
for some
g: x -> y
We say F is full on automorphisms if this holds in the special case where x = y and f is an automorphism.
It's just a matter of definition-chasing to see that if a functor F between groupoids is full, it's full on automorphisms and also essentially injective.
Conversely, a functor F between groupoids is full if it's full on automorphisms and also essentially injective!
After all, F is "essentially injective" iff whenever there's some isomorphism between F(x) and F(y), there's some isomorphism between x and y. When we combine this with "full on automorphisms",
we see that for any specific isomorphism
f: F(x) -> F(y)
we can find an isomorphism
g: x -> y
mapping to it. Why? Well, once we know x and y are isomorphic, there's no real distinction between x and y, so "full on automorphisms of x" is really the same as "full on isomorphisms between x
and y".
Now, what about sub-2-groups?
Well, it's tempting to define this concept in terms of the ones we've just listed. If we follow our gut instincts, we might say a 2-group homomorphism
F: G -> H
exhibits G as a sub-2-group of H if pi_1(F) and pi_2(F) are both injective. By Roberts' dictionary, this means F is essentially injective and faithful.
This is not the same as "pseudomonic", which in this context means "full and faithful" or essentially injective, faithful and full on automorphisms.
See the culprit? Being "full on automorphisms" is a kind of surjectivity, not a kind of injectivity.
It's too bad the comments here aren't dated, now that this is becoming a kind of communal research diary. Oh well: let it be noted that on Sunday, July 13th I thought about the following things
in the Starbucks right next to the Nine Zigzag Bridge.
I was desperately trying to understand sub-2-groups. So, I thought: in Klein geometry, the conceptual meaning of "subgroup" is really "stabilizer of some point in a set on which a group acts".
So, let's take a 2-group G acting on a category X, and let's study the stabilizer of some object x in X. Whatever this stabilizer is like, maybe this should become the definition of a sub-2-group!
(Or, maybe not - there are also stabilizers of things more complicated and interesting than a mere object. But never mind! - it's still an interesting exercise.)
Of course we need to define the stabilizer, say Stab(x). There's an obvious way to do this if you're careful not to be evil. I'll just sketch it.
The stabilizer Stab(x) is a 2-group with the following objects and morphisms. An object of Stab(x) is an object g of G together with an isomorphism
a: gx -> x
Nota bene: we're not evilly demanding that gx = x; we're specifying an isomorphism between them!
A morphism of Stab(x), say
g, a: gx -> x
g', a': g'x -> x
is a morphism f: g -> g' in G making the obvious triangle commute. Namely,
a: gx -> x
should equal the composite of
fx: gx -> g'x
a': g'x -> x.
It really looks much prettier as a triangle!
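For the record, here's that triangle - a sketch, written with composition going right-to-left this time:
$$
\begin{array}{ccc}
gx & \xrightarrow{\ fx\ } & g'x \\
 & {}_{a}\!\searrow & \downarrow {\scriptstyle a'} \\
 & & x
\end{array}
\qquad\text{i.e.}\qquad a' \circ (fx) \;=\; a .
$$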
With some work one makes Stab(x) into a 2-group - I didn't check everything here, but I'm following the tao of mathematics so I'm sure everything works, even when G is a weak 2-group and its
action on X is also weak - the general case. I also feel sure we get a 2-group homomorphism
i: Stab(x) -> G
By the philosophy I described, this "inclusion" of Stab(x) in G should be a great example of a sub-2-group!
But, even if the above went by in a blur, you'll surely note that objects of Stab(x) are objects of G equipped with extra structure!
That's a bit scary, since it means we're "losing something" when we map from Stab(x) to G - not what we'd naively expect when dealing with "the inclusion of a sub-2-group"!
But, what are we losing, exactly? The point is that
i: Stab(x) -> G
"forgets extra structure". There can be lots of different ways to make an object of G into an object of Stab(x) - lots of different ways for a guy in a 2-group to stabilize a guy in a category
it's acting on! Or, there can be no way at all! Choosing such a way is choosing extra structure.
If one is familiar with the yoga of properties, structure and stuff, one knows that "forgetting extra structure" means our functor is faithful, but it can fail to be full or essentially surjective.
By D. M. Roberts' handy chart, as polished by yours truly:
i is faithful: pi_2(i) is injective. YES.
i is essentially surjective: pi_1(i) is surjective. MAYBE NOT.
i is full: pi_1(i) is injective and pi_2(i) is surjective. MAYBE NOT.
We should not be surprised that something about i fails to be surjective. We may be surprised if something about it fails to be injective - since it's the very role model of an "inclusion of a
sub-2-group".
So, the only surprising thing on the list is that pi_1(i) may not be injective. In other words: i may not be "essentially injective"! Objects that are not isomorphic in the sub-2-group may become
so in the 2-group!
But this is not really surprising if you think about it. When we go from a sub-2-group to a 2-group, we can throw in new morphisms, which can make objects isomorphic that weren't before.
Moral: the inclusion of a sub-2-group
i: H -> G
had damn well better be faithful (injective on pi_2), but it might not be essentially injective (injective on pi_1).
I learned something else about
i: Stab(x) -> G
too! Namely, it's a discrete fibration of groupoids. Any morphism in G lifts uniquely to Stab(x) once we've lifted its source.
The fact that
i: Stab(x) -> G
is a fibration is nice - but in a sense no big deal, since any functor can be "improved" to be a fibration. The fact that it's a discrete fibration - the uniqueness of the lifting - is more of a big deal.
Okay, that's basically it for Sunday's Starbucks session. I'm sure it's too much for almost everyone!
I should have made clear the `dictionary' is not mine but Vitale's.
Back to an old question,
david wrote:
Can 2-groups be defined in terms of generators and relations?
The free groupoid on a graph + free group on the set of vertices should give us something like a free strict 2-group. The homotopy quotient by some sub-2-group should give us what we want. So what
is a 2-group of relations? Not equalities, but isos between objects (we have them) and a relation on morphisms.
On quotienting, it's almost (trying not to be evil) like we make some objects equal, keep the maps between them as automorphisms and identify some morphisms as the same. That is very hand-waving
and apologies for it. I feel like this operation is like in homotopy when one contracts a subcomplex (objects becoming equal) or throws cells in to fill up unwanted spheres - in this
case we are in very low dimensions, so circles and disks (morphisms becoming equal). Or am I completely wrong?
I can't think immediately of some homotopy group explanation/long exact sequence where one gets this stuff from.
It's too bad the comments here aren't dated, now that this is becoming a kind of communal research diary. Oh well: let it be noted that on Sunday, July 13th I thought about the following things
in the Starbucks right next to the Nine Zigzag Bridge.
But they are dated, and just as well, as you got the month wrong! I hope you didn't do any carving into the Nine Zigzag Bridge.
david wrote (a while ago):
Do we know that the inclusion of a kernel of a 2-group homomorphism is pseudomonic (or even injective on objects)? Is the definition of kernel fixed? Is there a notion of 'essential kernel'?
Perhaps this was too obvious, but here we go: is the kernel of a 2-group homomorphism F:G \to H the homotopy fibre (call it K) over the tensor unit I? (thinking of the beasties as monoidal cats -
otherwise as the fibre over the identity of the unique object if they were bigroupoids).
That is, objects of K are objects g in G such that F(g) is isomorphic to I, and morphisms are (iso)morphisms f in G such that F(f) = Id_I.
I haven't time to check the properties of i:K \to G, but it is certainly a truly injective functor. If h is iso to g in G and g is in K then h is in K, so we get whole isomorphism classes at
least. There should also be a universal property here so the kernel is defined only up to equivalence, but like when we talk about pullbacks, this seems to be a nice explicit description.
One other point, echoing the point that an n-vector space is an n-term chain complex of vector spaces: I seem to recall that a strict n-group should be a simplicial group with only n terms of its
Moore complex non-zero. I don't want to define the Moore complex here (it's nasty), but to say the least, it gives a chain complex of non-abelian groups (the construction is such that all images
are normal). For a 2-group, the crossed module G_1 \to G_0 is that Moore complex.
Moral: the inclusion of a sub-2-group
i: H -> G
had damn well better be faithful (injective on pi_2), but it might not be essentially injective (injective on pi_1).
Just to get this straight, let's return to my original sin, where evil first became apparent in Paradise.
I said:
might it be that although 2-vector spaces are classified up to equivalence by 2 natural numbers, it makes a difference to their 2-vector subspaces? E.g.,
1: C -> C
1: {0} -> {0}
are equivalent, but the former has a nontrivial sub 2-space.
I was referring to the 2-vector space i:{0} -> C, which appeared to be a subspace of the former but not the latter. Now, according to the current definition, as this subspace is clearly faithful
into C -> C, it must be a subspace. So it had better also be a subspace of {0} -> {0}. And, Lo and Behold, it is!
Hmm, any discrete 2-vector space is a sub 2-vector space of the trivial one! The only criterion to satisfy for being a sub 2-vector space of a 2-vector space with Betti numbers (b_0, b_1) is to
have first Betti number no larger than b_1.
Hmmmm, but then can't we have (6, 0) as subspace of (3,0) and vice versa, without them being equivalent?
Either I'm making some obvious blunder, or the world of 2-groups is quite strange.
John wrote:
"[...] now that this is becoming a kind of communal research diary."
Yes, it's great. This is the sort of web discussion that I like a lot.
Since you like it, too, and if we all feel that the software running this is not particularly inspiring, maybe we should think about starting -- a new group blog.
Some blog with advanced software, hosted by maybe John, David, myself, possibly others - filled with lots of the sort of discussion that we apparently all enjoy.
If you don't like this idea, never mind. If it sounds at least interesting, then the most immediate thing I could do about this is to ask Jacques Distler if he would be willing to set up and
administrate such a blog for us.
Jacques originally offered to set up the string coffee table, which runs the same software as his personal blog, when he heard me and others talk about the desire for an online place for
discussion of string theory.
You are all probably aware how the story continued, the outcome being the somewhat strange situation we have now.
So, I could ask Jacques if he would maybe abandon the SCT in favor of a true group blog - research diary style.
That's just the most immediate option I can think of. If you like the idea of a group blog, but don't want to depend on a third party administrating it (although to me this is a feature, not a
bug) we could of course also try to set up the software ourselves.
What do you think?
Urs wrote:
Since you like it, too, and if we all feel that the software running this is not particularly inspiring, maybe we should think about starting -- a new group blog.
What do you think?
It sounds like an interesting idea, but I have various worries. Let's discuss it over email.
After some stewing and brewing, the ideas in the previous post have corrected themselves a bit.
That is, objects of K are objects g in G such that F(g) is isomorphic to I, and morphisms are (iso)morphisms f in G such that F(f) = Id_I.
Better is the following (recall F: G --> H is a 2-group homomorphism, K its proposed kernel):
Ob(K) = pairs (g, j_g), with g in G and j_g an arrow of H such that j_g : F(g) --> I_H, for I_H the tensor unit in H
Arr(K) = arrows a of G, a: g1 --> g2, such that F(a) forms a commutative triangle with j_g1 and j_g2
Here (g1,j_g1) --a--> (g2,j_g2) is the arrow in K.
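Spelling out that commutative triangle (composing right-to-left), the condition on a is just
$$ j_{g_2} \circ F(a) \;=\; j_{g_1} \;:\; F(g_1) \longrightarrow I_H, $$
so an arrow of K is an arrow of G over which the two chosen trivializations agree.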
The inclusion map to G just forgets the extra structure given by the isomorphism j_g.
This looks awfully similar to John's definition of the stabilizer of an element in a category with a G-action, and I too invoke the tao of mathematics (or possibly just a slightly more vigorous
hand-wave ;)
If we consider the category of subgroups of a given 2-group (well, it should probably be a 2-category, and we haven't fully got a handle on sub-2-groups yet) with inclusions as the morphisms (and I
suppose transformations of inclusions too, to be safe), then I would imagine the above construction describes the kernel as its own stabilizer. Left and right multiplication by an object of the
2-group are equivalences by definition, and so give isomorphisms
- x g: K --> Kg
g x -: K --> gK
in the (2-)cat of subgroups of G, and a:g1 --> g2 will give us a (natural) isomorphism between
- x g1 and - x g2,
g1 x - and g2 x -.
Can we take this further and say that all sub-2-groups of a 2-group G are the stabilizers of the subcategories of G, the action given by multiplication? Many stabilizers will be trivial (or
equivalent to trivial) - those of subcategories which aren't sub-2-groups. This certainly works for the lattice of subsets of an honest group, and feels more Kleinian.
The other polished point is the free 2-group. Take a one-object 2-globular diagram (so a lot of edges from a vertex to itself with unoriented 2-cells thrown in). Take the free groupoid, or I
suppose the fundamental bigroupoid, of it. I'm not sure if the interchange law comes for free or not.
We can then throw in some more isomorphisms = relations, possibly using a double 2-category construction, but one which is still a legit 2-category (this is just a bit easier for me to
visualise). We get some empty tin cans and some full ones, and this should be done in a way such that the homotopy groups...
I just had this crazy realization: when I consider the homotopy groups of this free 2-group as above, what I'm doing is taking a 2D connected cell complex and forming its homology groups. This is
why I love maths!
For a finite complex, the homotopy groups will be free groups modulo the relation `are homologous'.
This also connects in a very nice way with the Betti numbers so far used.
Back when David was evil, he was worried that equivalent vector 2-spaces could have very different vector sub-2-spaces. For example,
1: C -> C
1: {0} -> {0}
are equivalent, but the former has a nontrivial sub-2-space, namely
i:{0} -> C
which is not a sub-2-space of the latter.
But now he's using a definition of "sub-2-group" where we say H is a sub-2-group of G if it's equipped with a faithful functor
i: H -> G
Now, according to the current definition, as i:{0} -> C is clearly faithful into C -> C, it must be a sub-2-space. So it had better also be a sub-2-space of {0} -> {0}. And, Lo and Behold, it is!
Of course this is just a consequence of using a non-evil definition of "sub-2-space". Non-evil means "invariant under equivalence", so of course any non-evil definition gives answers that are
invariant under equivalence.
But this does not mean we have the correct definition of sub-2-space.
Indeed, you're jumping the gun slightly. I hadn't quite gotten around to officially proclaiming a definition of sub-2-group. In my Starbucks session I observed some properties of "stabilizer
sub-2-groups" and guessed that these should be part of the definition of "sub-2-group". I also noticed some properties that shouldn't be part of the definition. But, I never claimed to have
discovered all the properties of stabilizer sub-2-groups.
So, I still have a chance of weaseling out of any paradoxes you throw my way: I can claim there's some extra clause in the definition of sub-2-group that saves the day.
Or, if that doesn't save me, I can pull another trick.
Note that we're talking about sub-2-groups for two different reasons here. First, so we can talk about stabilizer sub-2-groups, which are fundamental to Klein 2-geometry. Second, so we can talk
about vector sub-2-spaces, which are fundamental to projective 2-geometry.
It's great that we've got these two different reasons for thinking about sub-2-groups: it lets us study them from different angles. But, we can't be 100% sure that the same class of sub-2-groups
will be the right thing for both purposes. Sometimes notions "split in two" when you generalize them. When you categorify, you always have to keep in mind this possibility. So, maybe this is
what's going on.
(Of course I don't want to play that card prematurely - it's nicest when concepts categorify in a unified way, without fragmenting.)
Anyway, you're doing a great job of putting me on the hotseat, making me struggle to come up with a beautiful theory that's not "too weird" in one way or another. But, I don't feel you've got me
boxed in yet.
I'm glad you're being agonistic, not antagonistic. And, always keep in mind Google's corporate motto.
You know, before I go to bed I just want to emphasize that this "splitting of concepts as we categorify" is already famous - precisely in the realm we're struggling with now. The twin concepts of
"injective" and "surjective" for functions break apart into three famous concepts for functors: "faithful", "full" and "essentially surjective". The usual factorization of a function into a
surjection followed by an injection breaks apart into a three-way factorization, as explained in section 3 here. And, this turns out to be not a mess, but part of a beautiful pattern.
So, it's quite possible that slugging it out over the puzzle "what's the correct generalization of `injection' for 2-groups?" is missing the point. We see two important concepts staring us in the face:
injective on pi_1 - "essentially injective"
injective on pi_2 - "faithful"
And, it may be necessary to deploy both of these, either separately or in combination, in the right ways at the right times.
I'm glad you're being agonistic, not antagonistic.
The delights of dialectic. I've always wanted to live in a Lakatos dialogue. Actually, I think we're being far more respectful to each other than the characters in Proofs and Refutations.
injective on pi_1 - "essentially injective"
injective on pi_2 - "faithful"
And, it may be necessary to deploy both of these, either separately or in combination, in the right ways at the right times.
If you want to stick with your claim that 2-vector spaces can be classified by 2 Betti numbers, (b_0,b_1), and you insist on not being evil, then our current tools to decide about a potential
subspace (c_0,c_1) offers us:
(i) faithful requiring that c_1 <= b_1
(ii) essentially injective on objects requiring that c_0 <= b_0
(iii) full on isomorphisms requiring that c_1 >= b_1
I'll bet (i) always gets picked. Tediousness is hardly the right way to judge things, but I'd rather not have (iii), since together with (i) we'd have to have c_1 = b_1.
(ii) would mean that i:{0} -> C is not a subspace of 1:C -> C, or that the 2-group with two objects and only identity arrows is not a sub-2-group of the 2-group with two objects and single arrows
between each pair.
One more question before I butt out:
I've seen a subobject in a category defined as an equivalence class of monics. Modulo the correct definition of injective for the case of 2-groups, would it be more correct to say a sub-2-group H
of G is an arrow H >--> G, or an equivalence class of arrows?
Electronic configuration Of Elements - Chemistry, Class 11, Classification of Elements and Periodicity in Properties
(1) The names are derived directly from the atomic numbers, using the numerical roots for 0 and the numbers 1-9 and adding the suffix -ium. The roots for the numbers 0-9 are: 0 = nil, 1 = un, 2 = bi, 3 = tri, 4 = quad, 5 = pent, 6 = hex, 7 = sept, 8 = oct, 9 = enn.
(2) In certain cases the names are shortened: "bi ium" and "tri ium" are shortened to "bium" and "trium", and "enn nil" is shortened to "ennil".
(3) The symbol of the element is then obtained from the first letters of the roots of the numbers which make up the atomic number of the element.
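The naming rules above are mechanical enough to turn into a short program. Here is a rough sketch in Julia (the digit roots and elision rules are the standard IUPAC ones just described; the function name systematic_name is made up for illustration):

# digit roots used in IUPAC systematic element names: 0 = nil, 1 = un, ..., 9 = enn
const digit_root = ["nil", "un", "bi", "tri", "quad", "pent", "hex", "sept", "oct", "enn"]

function systematic_name(Z::Int)
    roots = [digit_root[d + 1] for d in reverse(digits(Z))]   # one root per digit of Z
    name = join(roots) * "ium"
    name = replace(name, "iium" => "ium")     # "bi ium" / "tri ium" shorten to "bium" / "trium"
    name = replace(name, "nnnil" => "nnil")   # "enn nil" shortens to "ennil"
    symbol = uppercasefirst(join(first.(roots)))   # first letters of the roots
    return name, symbol
end

systematic_name(119)   # ("ununennium", "Uue")
systematic_name(120)   # ("unbinilium", "Ubn")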
An electron in an atom is characterised by a set of four quantum numbers (n, l, m and s), and the principal quantum number (n) defines the main energy level, known as the shell.
The location of any element in the periodic table tells us the quantum numbers (n and l) of the last orbital filled.
Electronic configuration of elements in period
Each period in the periodic table indicates the value of n for the outermost or the valence shell. The total number of elements in each period is twice the number of orbitals available in the energy
level that is being filled.
(1) The first period corresponds to the filling of electrons in the first energy shell (K shell), i.e. n = 1. Since this energy shell has only one orbital, 1s, which can accommodate only 2 electrons,
the first period has only 2 elements.
(2) The second period corresponds to the filling of electrons in the second energy shell (L shell), i.e. n = 2. This shell has 4 orbitals (one 2s and three 2p) which can accommodate 8 electrons,
therefore the second period contains 8 elements. It starts with lithium (Z=3) and ends at neon (Z=10).
(3) The third period corresponds to the filling of electrons in the third shell, i.e. n = 3. This shell has 9 orbitals (one 3s, three 3p and five 3d). The 3d orbitals have even higher energy than the 4s
orbital, so they are filled only after the filling of the 4s orbital. The third period therefore involves the filling of only 4 orbitals (one 3s and three 3p) and thus contains 8 elements. It starts with
sodium (Z=11) and ends at argon (Z=18).
(4) The fourth period corresponds to the filling of electrons in the fourth energy level, n = 4. It starts with potassium (Z=19), and the filling of the 4s orbital is completed at calcium (Z=20).
After the filling of 4s orbitals, the filling of the five 3d orbitals begins, since the energy of the 3d orbitals is lower than that of the 4p orbitals but higher than that of the 4s orbital. The filling of 4d and 4f
orbitals does not occur in this period since their energies are higher than even that of the 5s orbital. The filling of the 3d orbitals starts from scandium (Z=21) and ends at zinc (Z=30). These 10
elements constitute the 3d transition series.
The filling of the 4p orbitals begins at gallium (Z=31) and ends at krypton (Z=36), which has the outer electronic configuration 4s^2 3d^10 4p^6. In the 4th period, the filling of only 9 orbitals (one
4s, five 3d and three 4p) occurs, which can accommodate at the maximum 18 electrons. Therefore the 4th period contains 18 elements, from potassium to krypton.
(5) The fifth period also contains 18 elements, since only 9 orbitals (one 5s, five 4d and three 5p) are available for filling with electrons. It begins with rubidium (Z=37), in which one electron
enters the 5s orbital. After the filling of the 5s orbital, the filling of the 4d orbitals starts at yttrium (Z=39) and ends at cadmium (Z=48). These ten elements constitute the 4d transition series. Filling of the 5p
orbitals starts at indium (Z=49) and ends at xenon (Z=54).
(6) The sixth period corresponds to the filling of the 6th energy level, i.e. n = 6. In all, 16 orbitals (one 6s, seven 4f, five 5d and three 6p) are available for filling with electrons, therefore the 6th period contains
32 elements. It begins with caesium (Z=55), in which one electron enters the 6s orbital, and ends with radon (Z=86), in which the filling of the 6p orbitals is complete. After the filling of the 6s orbital, the
next electron enters the 5d orbital, and thereafter the filling of the seven 4f orbitals begins with cerium (Z=58) and ends with lutetium (Z=71). These 14 elements constitute the first inner transition
series, called the lanthanides or lanthanoids.
Filling of the 5d orbitals, which started at lanthanum, continues from hafnium (Z=72) till it is completed at mercury (Z=80). These 10 elements constitute the 5d transition series. After the filling of the 5d
orbitals, the filling of the 6p orbitals starts at thallium (Z=81) and ends at radon (Z=86).
(7) The seventh period corresponds to the filling of the 7th energy shell, i.e. n = 7. It also contains 32 elements, corresponding to the filling of 16 orbitals (one 7s, seven 5f, five 6d and three 7p).
After the filling of the 7s orbital, the next two electrons enter the 6d orbitals (at actinium and thorium), and therefore the filling of the seven 5f orbitals begins with protactinium (Z=91) and ends with lawrencium (Z=103).
Thorium does not have any electron in the 5f orbital, yet it is considered to be an f-block element since its properties resemble the f-block elements more than the d-block elements. These 14
elements from thorium (Z=90) to lawrencium (Z=103) constitute the second (or 5f) inner transition series, which is called the actinides or actinoids.
Filling of the 6d orbitals, which started at actinium (Z=89), continues till it is completed at Uub (Z=112). These 10 elements constitute the 6d transition series. After the filling of the 6d orbitals, the
filling of the 7p orbitals begins at Uut (Z=113) and ends at Uuo (Z=118), which belongs to the noble gas family.
The first three periods, containing 2, 8 and 8 elements, are known as short periods, while the next three periods, containing 18, 18 and 32 elements, are called long periods.
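The filling order described period by period above (4s before 3d, 6s before 4f before 5d, and so on) follows the n + l (Madelung) rule, which is easy to turn into a small program. A rough sketch in Julia - it reproduces only the idealized Aufbau order, so real exceptions such as chromium and copper come out differently from their observed configurations:

subshell_letter = Dict(0 => "s", 1 => "p", 2 => "d", 3 => "f")
capacity(l) = 2 * (2l + 1)   # an s, p, d, f subshell holds 2, 6, 10, 14 electrons

function aufbau_configuration(Z::Int)
    shells = [(n, l) for n in 1:8 for l in 0:min(n - 1, 3)]
    sort!(shells, by = nl -> (nl[1] + nl[2], nl[1]))   # n + l rule, ties broken by lower n
    remaining = Z
    parts = String[]
    for (n, l) in shells
        remaining == 0 && break
        e = min(remaining, capacity(l))
        push!(parts, "$(n)$(subshell_letter[l])^$(e)")
        remaining -= e
    end
    return join(parts, " ")
end

aufbau_configuration(26)   # iron: "1s^2 2s^2 2p^6 3s^2 3p^6 4s^2 3d^6"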
Group wise electronic configuration
The elements in the same group or vertical column have similar valence shell electronic configurations, i.e. they have the same number of electrons in the outer orbitals and hence have similar
properties. Elements of group 1 all have an ns^1 valence shell electronic configuration. Elements of group 17 all have an ns^2 np^5 valence shell electronic configuration.
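For instance, with the aufbau_configuration sketch above, aufbau_configuration(17) returns "1s^2 2s^2 2p^6 3s^2 3p^5", whose outermost shell matches the ns^2 np^5 pattern quoted for group 17.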
This is an implementation of the PAGE09 model in the Julia programming language. It was created from the equations in Hope (2011), and then compared against the original Excel version of PAGE09.
Additional background information about the PAGE model can be found in Hope (2006).
The documentation for MimiPAGE2009.jl can be accessed here.
You need to install julia 1.6 or newer to run this model.
You first need to connect your julia installation with the central Mimi registry of Mimi models. This central registry is like a catalogue of models that use Mimi that is maintained by the Mimi
project. To add this registry, run the following command at the julia package REPL:
pkg> registry add https://github.com/mimiframework/MimiRegistry.git
You only need to run this command once on a computer.
The next step is to install MimiPAGE2009.jl itself. You need to run the following command at the julia package REPL:
pkg> add MimiPAGE2009
You probably also want to install the Mimi package into your julia environment, so that you can use some of the tools in there:
pkg> add Mimi
The model uses the Mimi framework and it is highly recommended to read the Mimi documentation first to understand the code structure. For starter code on running the model just once, see the code in
the file examples/main.jl.
The basic way to access a copy of the default MimiPAGE2009 model is the following:
using MimiPAGE2009
m = MimiPAGE2009.get_model()
Here is an example of computing the social cost of carbon with MimiPAGE2009. Note that the units of the returned value are dollars $/ton CO2.
using Mimi
using MimiPAGE2009
# Get the social cost of carbon in year 2020 from the default MimiPAGE2009 model:
scc = MimiPAGE2009.compute_scc(year = 2020)
# You can also compute the SCC from a modified version of a MimiPAGE2009 model:
m = MimiPAGE2009.get_model() # Get the default version of the MimiPAGE2009 model
update_param!(m, :ClimateTemperature, :tcr_transientresponse, 3) # Try a higher transient climate response value
scc = MimiPAGE2009.compute_scc(m, year=2020) # compute the scc from the modified model by passing it as the first argument to compute_scc
The first argument to the compute_scc function is a MimiPAGE2009 model, and it is an optional argument. If no model is provided, the default MimiPAGE2009 model will be used. There are also other
keyword arguments available to compute_scc. Note that the user must specify a year for the SCC calculation, but the rest of the keyword arguments have default values. Note that a pulse "in 2020"
produces a gradual increase from 2015-2020 (or whatever the preceding period is), followed by a gradual decrease in emissions from 2020-2030 (or whatever the following period is). Emissions are
linearly interpolated between the points given by the years.
compute_scc(m = get_model(), # if no model provided, will use the default MimiPAGE2009 model
    year = nothing, # user must specify an emission year for the SCC calculation
    eta = nothing, # eta parameter for ramsey discounting representing the elasticity of marginal utility. If nothing is provided, the value of parameter :emuc_utilityconvexity in the MimiPAGE2009 model is unchanged, which has a default value of 1.1666666667.
    prtp = nothing, # pure rate of time preference parameter used for discounting. If nothing is provided, the value of parameter :ptp_timepreference in the MimiPAGE2009 model is unchanged, which has a default value of 1.0333333333%.
    equity_weighting = true,
    pulse_size = 100_000 # the pulse size in metric megatonnes of CO2 (Mtonne CO2) (see below for more details)
)
There is an additional function for computing the SCC that also returns the MarginalModel that was used to compute it. It returns these two values as a NamedTuple of the form (scc=scc, mm=mm). The
same keyword arguments from the compute_scc function are available for the compute_scc_mm function. Example:
using Mimi
using MimiPAGE2009
result = MimiPAGE2009.compute_scc_mm(year=2030, eta=0, prtp=0.025)
result.scc # returns the computed SCC value
result.mm # returns the Mimi MarginalModel
marginal_temp = result.mm[:ClimateTemperature, :rt_realizedtemperature] # marginal results from the marginal model can be accessed like this
By default, MimiPAGE2009 will calculate the SCC using a marginal emissions pulse of 100_000 metric megatonnes of CO2 (Mtonne CO2) spread over the years before and after year. Regardless of this pulse
size, the SCC will be returned in units of dollars per ton since it is normalized over this pulse size. This choice of pulse size and duration is a decision made based on experiments with stability
of results and moving from continuous to discretized equations, and can be found described further in the literature around PAGE.
If you wish to alter this pulse size, it is an optional keyword argument to the compute_scc function where pulse_size controls the size of the marginal emission pulse. For a deeper dive into the
machinery of this function, see the forum conversation here and the docstrings in compute_scc.jl.
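For example, a run with a smaller pulse might look like this (a sketch using only the keyword documented above; the returned value will differ slightly from the default-pulse result for the discretization reasons just described):

using MimiPAGE2009
# same year-2020 SCC, but computed from a 10,000 Mtonne CO2 marginal pulse;
# the result is still normalized to dollars per ton of CO2
scc_small_pulse = MimiPAGE2009.compute_scc(year = 2020, pulse_size = 10_000)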
Hope, Chris. The PAGE09 integrated assessment model: A technical description. Cambridge Judge Business School Working Paper, 2011, 4(11).
Hope, Chris. The marginal impact of CO2 from PAGE2002: An integrated assessment model incorporating the IPCC's five reasons for concern. Integrated Assessment, 2006, 6(1): 19-56.
1. Free, publicly-accessible full text available March 1, 2026
2. Free, publicly-accessible full text available August 26, 2025
3. We examine the utility of data on active and vacant residential addresses to inform local and timely monitoring and assessments of how areas impacted by wildfires and extreme weather events more
broadly lose (or not) and subsequently recover (or not) their populations. Provided by the U.S. Postal Service to the U.S. Department of Housing and Urban Development and other users, these data
are an underutilized and potentially valuable tool to study population change in disaster-affected areas for at least three reasons. First, as they are aggregated to the ZIP + 4 level, they permit
highly local portraits of residential and, indirectly, of population change. Second, they are tabulated on a quarterly basis starting in 2010 through the most recent quarter, thereby allowing for
more timely assessments than other data sources. Third, one mechanism of population change—namely, underlying changes in residential occupancies and vacancies—is explicit in the data. Our findings
show that these data are sufficient for detecting signals of residential and, indirectly, of population change during and after particularly damaging wildfires; however, there is also noticeable
variation across cases that requires further investigations into, for example, the guidance the U.S. Postal Service provides its postal offices and carriers to classify addresses as vacant.
Free, publicly-accessible full text available August 1, 2025
4. Development of bioconjugation strategies to efficiently modify biomolecules is of key importance for fundamental and translational scientific studies. Cysteine S-arylation is an approach which is
becoming more popular due to generally rapid kinetics and high chemoselectivity, as well as the strong covalently bonded S-aryl linkage created in these processes. Organometallic approaches to
cysteine S-arylation have been explored that feature many advantages compared to their more traditional organic counterparts. In this Viewpoint, progress in the use of Au(III) and Pd(II)
oxidative addition (OA) complexes for stoichiometric cysteine S-arylation is presented and discussed. A focus is placed on understanding the rapid kinetics of these reactions under mild
conditions, as well as the ability to generate biomolecular heterostructures. Potential avenues for further exploration are addressed and usefulness of these methods to the practitioner are
emphasized in the discussion.
Free, publicly-accessible full text available July 17, 2025
5. Ultra-high vacuum scanning tunneling microscopy (UHV-STM) was used to investigate two related molecules pulse-deposited onto Au(111) surfaces: indoline-2-carboxylic acid and proline
(pyrrolidine-2-carboxylic acid).
Free, publicly-accessible full text available October 9, 2025
6. Planar magnetic microswimmers are well-suited for in vivo biomedical applications due to their cost-effective mass production through standard photolithography techniques. The precise control of
their motion in diverse environments is a critical aspect of their application. This study demonstrates the control of these swimmers individually and as a swarm, exploring navigation through
channels and showcasing their functional capabilities for future biomedical settings. We also introduce the capability of microswimmers for surface motion, complementing their traditional
fluid-based propulsion and extending their functionality. Our research reveals that microswimmers with varying magnetization directions exhibit unique trajectory patterns, enabling complex swarm
tasks. This study further delves into the behavior of these microswimmers in intricate environments, assessing their adaptability and potential for advanced applications. The findings suggest
that these microswimmers could be pivotal in areas such as targeted drug delivery and precision medical procedures, marking significant progress in the biomedical and micro-robotic fields and
offering new insights into their control and behavior in diverse environments.
Free, publicly-accessible full text available June 27, 2025
7. Free, publicly-accessible full text available June 27, 2025
8. The utilization of visible light to mediate chemical reactions in fluid solutions has applications that range from solar fuel production to medicine and organic synthesis. These reactions are
typically initiated by electron transfer between a photoexcited dye molecule (a photosensitizer) and a redox-active quencher to yield radical pairs that are intimately associated within a solvent
cage. Many of these radicals undergo rapid thermodynamically favored “geminate” recombination and do not diffuse out of the solvent cage that surrounds them. Those that do escape the cage are
useful reagents that may undergo subsequent reactions important to the above-mentioned applications. The cage escape process and the factors that determine the yields remain poorly understood
despite decades of research motivated by their practical and fundamental importance. Herein, state-of-the-art research on light-induced electron transfer and cage escape that has appeared since
the seminal 1972 review by J. P. Lorand entitled “The Cage Effect” is reviewed. This review also provides some background for those new to the field and discusses the cage escape process of both
homolytic bond photodissociation and bimolecular light induced electron transfer reactions. The review concludes with some key goals and directions for future research that promise to elevate
this very vibrant field to even greater heights.
Free, publicly-accessible full text available June 12, 2025
9. We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F = \exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent—for example, depolarizing or dephasing noise—the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}} \approx F\, p_{\text{ideal}} + (1-F)\, p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F\epsilon\sqrt{s})$. Thus, the "white-noise approximation" is meaningful when $\epsilon\sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity. The bound applies if the circuit size satisfies $s \ge \Omega(n\log(n))$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
10. We propose a novel deterministic method for preparing arbitrary quantum states. When our protocol is compiled into CNOT and arbitrary single-qubit gates, it prepares an $N$-dimensional state in depth $O(\log(N))$ and spacetime allocation (a metric that accounts for the fact that oftentimes some ancilla qubits need not be active for the entire circuit) $O(N)$, which are both optimal. When compiled into the $\{\mathrm{H}, \mathrm{S}, \mathrm{T}, \mathrm{CNOT}\}$ gate set, we show that it requires asymptotically fewer quantum resources than previous methods. Specifically, it prepares an arbitrary state up to error $\epsilon$ with optimal depth of $O(\log(N) + \log(1/\epsilon))$ and spacetime allocation $O(N\log(\log(N)/\epsilon))$, improving over $O(\log(N)\log(\log(N)/\epsilon))$ and $O(N\log(N/\epsilon))$, respectively. We illustrate how the reduced spacetime allocation of our protocol enables rapid preparation of many disjoint states with only constant-factor ancilla overhead: $O(N)$ ancilla qubits are reused efficiently to prepare a product state of $w$ $N$-dimensional states in depth $O(w + \log(N))$ rather than $O(w\log(N))$, achieving effectively constant depth per state. We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations. We provide quantum circuit descriptions of our protocol, detailed pseudocode, and gate-level implementation examples using Braket.
Free, publicly-accessible full text available February 15, 2025
|
{"url":"https://par.nsf.gov/search/author:%22Alexander,%20M.%22","timestamp":"2024-11-05T16:44:27Z","content_type":"text/html","content_length":"311359","record_id":"<urn:uuid:4e1c7c04-3269-4457-b57b-c3beaddfc0e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00117.warc.gz"}
|
Lesson 5
Attributes of Other Quadrilaterals
Warm-up: Number Talk: Divide by 7 (10 minutes)
This Number Talk prompts students to rely on properties of operations and the relationship between multiplication and division to divide within 100. The reasoning here helps students develop fluency
in division.
• Display one expression.
• “Give me a signal when you have an answer and can explain how you got it.”
• 1 minute: quiet think time
• Record answers and strategy.
• Keep expressions and work displayed.
• Repeat with each expression.
Student Facing
Find the value of each expression mentally.
• \(70\div7\)
• \(77\div7\)
• \(63\div7\)
• \(56\div7\)
Activity Synthesis
• “How did you use facts you know to find facts you didn’t know?” (I used \(70\div7\) and thought about one more group to find \(77\div7\). I used \(70\div7\) and one less group to find \(63\div7\).)
Activity 1: All the Ways (20 minutes)
The purpose of this activity is to deepen students’ understanding that a shape can belong to multiple categories because of its attributes. Students analyze shapes and determine all the ways that
each one could be named. The names may refer to a broad category such as triangle or quadrilateral, or a narrower subcategory such as rhombus or rectangle. As they name the different categories,
students need to be precise both about the meaning of the categories and about verifying the properties of the different shapes (MP6).
MLR8 Discussion Supports. Synthesis: For each observation that is shared, invite students to turn to a partner and restate what they heard using precise mathematical language.
Advances: Listening, Speaking
Engagement: Provide Access by Recruiting Interest. Leverage choice around perceived challenge. Invite students to select at least 4 of the 6 problems.
Supports accessibility for: Organization, Attention, Social-Emotional Skills
• Groups of 2
• “Look at the quadrilateral in the first problem. Work independently to circle all the names that you could use to describe the quadrilateral. Be prepared to share your reasoning.”
• 1 minute: independent work time
• “Discuss your responses and reasoning with your partner.”
• 2 minutes: partner discussion
• Share responses.
• “Complete the rest of the problems independently. Be prepared to explain your reasoning.”
• 3–5 minutes: independent work time
• “Now, discuss your answers with your partner. Be sure to explain your reasoning for each way you described the shape. Also, be sure to ask your partner if you have any questions about their reasoning.”
• 5–7 minutes: partner discussion
• Monitor for students who notice that some shapes can be described using multiple terms.
Student Facing
Select all the ways you could describe each shape. Be prepared to explain your reasoning.
1. triangle
2. quadrilateral
3. square
4. rhombus
5. rectangle
1. triangle
2. quadrilateral
3. hexagon
4. rhombus
5. rectangle
6. square
1. triangle
2. quadrilateral
3. pentagon
4. rhombus
5. rectangle
6. square
1. triangle
2. quadrilateral
3. hexagon
4. rhombus
5. rectangle
6. square
1. hexagon
2. quadrilateral
3. triangle
4. square
5. rectangle
6. rhombus
1. hexagon
2. quadrilateral
3. triangle
4. rhombus
5. rectangle
6. square
Advancing Student Thinking
If students use only one name for a shape that can be named in multiple ways, consider asking:
• “How did you describe the shape?”
• “Are there any other names that could be used to describe the shape?”
Activity Synthesis
• Select 1–2 students to share the terms they selected for each of the last four quadrilaterals and their reasoning.
• Consider asking:
□ “Who can restate _____’s reasoning in a different way?”
□ “Does anyone want to add on to _____’s reasoning?”
□ “Do you agree or disagree? Why?”
• “The last shape can be described with 4 of the choices. How is it possible that it can be described in so many ways?” (It is a quadrilateral because it has 4 sides. It is a rhombus because it’s a
quadrilateral with 4 sides that are the same length. It is a rectangle because it has 4 right angles and 2 pairs of sides that are the same length. It is a square because it has 4 sides that are
the same length and 4 right angles. The last three are more specific descriptions of a quadrilateral.)
Activity 2: Draw One That’s Not . . . (15 minutes)
The purpose of this activity is for students to apply what they know about the defining attributes of rectangles, rhombuses, and squares to draw shapes that are not those quadrilaterals. They use
geometric attributes to explain why their drawings meet the criteria.
• Groups of 2
• “Take a minute and think about how you could draw a shape for each one of these descriptions.”
• 1 minute: quiet think time
• “Now, work with your partner to draw a shape for each statement. Be ready to explain how you know each shape matches the description given.”
• 7–10 minutes: partner work time
Student Facing
1. Draw a quadrilateral that isn’t a square.
2. Draw a quadrilateral that isn’t a rhombus.
3. Draw a quadrilateral that isn’t a rectangle.
4. Draw as many quadrilaterals as you can that aren’t rhombuses, rectangles, or squares.
Activity Synthesis
• Select students to share their drawings and explanations for the first three problems.
• Highlight explanations that include the defining attributes of squares, rectangles, and rhombuses.
• Invite students to share as many different quadrilaterals as they can think of for the last problem. Display as many as possible.
Lesson Synthesis
“How has your thinking changed over the last few lessons about what a quadrilateral can look like?” (Before, when I thought of quadrilaterals, I thought of rectangles and squares, but now I know they
can look so different. Some have right angles and some don’t. Some have sides with equal length and some don’t. They all look really different even though they have some things in common.)
Cool-down: Describe It, Draw It (5 minutes)
Student Facing
In this section, we learned to sort shapes based on attributes such as the number of sides, side lengths, and whether angles were right angles. We also sorted quadrilaterals and triangles into more
specific groups.
We learned that a shape can be named based on its attributes. For example:
• If a triangle has a right angle, then it is a right triangle.
• If a quadrilateral has 2 pairs of sides that are the same length and 4 right angles, then it is a rectangle.
• If a quadrilateral has sides that are all the same length, then it is a rhombus.
• If a quadrilateral has sides that are all the same length and 4 right angles, then it is a square.
|
{"url":"https://curriculum.illustrativemathematics.org/k5/teachers/grade-3/unit-7/lesson-5/lesson.html","timestamp":"2024-11-03T06:03:39Z","content_type":"text/html","content_length":"119569","record_id":"<urn:uuid:b6b86beb-2798-431f-970b-7d57c2fe8696>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00846.warc.gz"}
|
Magnetic field and temperature effect on the localization length of poly(dA)-poly(dT) DNA molecule
The probability of finding electrons along the DNA chain is described by the localization length. We use a tight-binding Hamiltonian approach, the transfer matrix method, and Gram-Schmidt orthogonalization in studying the localization length of the poly(dA)-poly(dT) DNA molecule. The molecule being studied consists of 102 base pairs of adenine (A) and thymine (T) as well as a backbone consisting of phosphate and sugar. Electron hopping between backbone sites is allowed in this model. The effect of a magnetic field on the localization length is studied by including its influence on electron hopping in DNA in the form of a Peierls phase factor. The effect of temperature is studied by considering the twisting vibration of the DNA molecule. The results show that the localization length decreases when the temperature increases. On the other hand, when the magnetic field increases, the localization length decreases at some energies.
Original language English
Title of host publication Proceedings of the 3rd International Symposium on Current Progress in Mathematics and Sciences 2017, ISCPMS 2017
Editors Ratna Yuniati, Terry Mart, Ivandini T. Anggraningrum, Djoko Triyono, Kiki A. Sugeng
Publisher American Institute of Physics Inc.
ISBN (Electronic) 9780735417410
Publication status Published - 22 Oct 2018
Event 3rd International Symposium on Current Progress in Mathematics and Sciences 2017, ISCPMS 2017 - Bali, Indonesia
Duration: 26 Jul 2017 → 27 Jul 2017
Publication series
Name AIP Conference Proceedings
Volume 2023
ISSN (Print) 0094-243X
ISSN (Electronic) 1551-7616
Conference 3rd International Symposium on Current Progress in Mathematics and Sciences 2017, ISCPMS 2017
Country/Territory Indonesia
City Bali
Period 26/07/17 → 27/07/17
• localization length
• magnetic field
• poly(dA)-poly(dT) DNA
• temperature
|
{"url":"https://scholar.ui.ac.id/en/publications/magnetic-field-and-temperature-effect-on-the-localization-length-","timestamp":"2024-11-04T14:59:34Z","content_type":"text/html","content_length":"57190","record_id":"<urn:uuid:535485ba-8950-478a-9427-e87619f75c7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00652.warc.gz"}
|
Symmetry and Counting
God decided to take the devil to court and settle their differences once and for all. When Satan heard of this, he grinned and said, “And where do you think you’re going to find a good fucking lawyer?”
1. If G is a group with identity element e and X is a set, then a (left) group action α of G on X is a function α: G x X → X that satisfies: α(e, x) = x, and α(g, α(h, x)) = α(gh, x) with α(g, x)
often shortened to g·x, hence (i) e·x = x, (ii) g·(h·x) = (gh)·x. The group G is said to act on X. A set X together with an action of G is called a G-set.
2. If G is a group acting on a set X and x ∈ X, the orbit of an element x ∈ X is the set of elements that x can reach via the action of some element of the group, Orb[G](x) = {g·x | g ∈ G}.
3. For every element x ∈ X, the stabilizer subgroup of G with respect to x is the set of all elements in G that fix x: G[x] = stab[G](x) = {g ∈ G | g·x = x}.
4. Given a fixed group element g ∈ G, the stable set of g is the set of elements of X that are stabilized by g, that is, X[g] = {x ∈ X: g·x = x} ⊆ X.
For any sets X and Y, the set of all functions f : X → Y is denoted as Y^X = {f | f: X → Y}. Let G be a group acting on X. Then Y^X has a natural G-set structure, that is, ·: G x Y^X → Y^X, where for g ∈ G and f ∈ Y^X, g·f: X → Y is defined as the function (g·f)(x) = f(g^-1·x) ∀x ∈ X (the inverse is needed for the action axioms (i) and (ii) to hold).
Lemma. Let G be a group whose identity is e. Let X be a set and ·:G x X → X be a group action. Then, the set of orbits of X under the action of G, written as X/G or X[G], forms a partition of X.
Let’s define the relation x R[g] y or, more succinctly, x ~ y ↭ y ∈ Orb(x) ↭ ∃g ∈ G: y = g·x. It is an equivalent relation. ∀x, y, z ∈ X:
1. Reflexivity. x ~ x because ∃e ∈ G (G is a group): x = e·x (i)
2. Symmetry: x ~ y ↭ y ∈ Orb(x) ↭ ∃g ∈ G: y = g·x ⇒ g^-1·y = g^-1·(g·x) ⇒[(ii)’s axiom action] g^-1·y = x ⇒ ∃g^-1 ∈ G: x = g^-1·y ⇒ x ∈ Orb(y) ⇒ y ~ x.
3. Transitivity: x ~ y and y ~ z ⇒ ∃g, g’ ∈ G: y = g·x, z = g’·y = g’·(g·x) =[(ii)’s axiom action] (g’g)·x, that is, z = (g’g)·x ⇒ z ∈ Orb(x) ⇒ x ~ z.
The quotient set X/R[g], that is, the set of orbits of the group action forms a partition of X.
Notice that the equivalent class of x is [x] = {y ∈ X | x ~ y} = {y ∈ X | y ∈ Orb(x)} = {y ∈ X | ∃g ∈ G: y = g·x}
Lemma. Let x and y be two elements in X, and let g be a group element such that y = g·x. Then, the two stabilizer subgroups G[x] = stab[G](x) and G[y] = stab[G](y) are related by G[y] = gG[x]g^-1. In other words, the stabilizers of elements in the same orbit are conjugate to each other.
Assume that x and y are two elements in X, and let g be a group element such that y = g·x.
Let’s prove that G[y] ⊆ gG[x]g^-1. ∀h ∈ G[y] ↭ h·y = y ↭[y = g·x] h·(g·x) = g·x ⇒ [Applying g^-1 to both sides of this equality and taking (ii) into account yields…] (g^-1hg)·x = x ⇒ g^-1hg ∈ G[x] ⇒
h = g(g^-1hg)g^-1 ∈ gG[x]g^-1.
Next, let’s prove that gG[x]g^-1 ⊆ G[y]. y = g·x ⇒ x = g^-1·y. ∀h ∈ G[x] ↭ h·x = x ⇒ h·(g^-1·y) = g^-1·y ⇒ [Applying g to both sides of this equality yields…] (ghg^-1)·y = y ↭ ghg^-1 ∈ G[y] ∎
Let G be a group acting on a set X. We have demonstrated that the stabilizers of elements in the same orbit are conjugate to each other, that is, G[g·x] = gG[x]g^-1, and therefore they are isomorphic
to each other G[x] ≋ G[g·x] via an inner automorphism. In particular, if x and y are G-equivalent, then |G[x]| = |G[y]|
Let G be a group. For any g ∈ G, define the map c[g]: G → G, given by c[g](x) = gxg^-1. c[g] is an automorphism of G, and is termed as an inner automorphism.
Burnside’s Theorem. If G is a finite group acting on X, then the number k of G-orbits of X is $\frac{1}{|G|}\sum_{g∈G}|X_g|$
Consider the equation g·x = x of two variables g ∈ G, x ∈ X. How many solutions does it have?
For each g ∈ G, the number of such pairs is $|X_g|$, so the total is $\sum_{g∈G}|X_g|$. For each x ∈ X, the number of such pairs is |G[x]|, so the total is $\sum_{x∈X}|G_x|$, and obviously $\sum_{g∈G}|X_g|=\sum_{x∈X}|G_x|$ 🚀
The Fundamental Counting Principle. Let G be a group acting on X and x an element of X. Then, the cardinality of an orbit is equal to the index of its stabilizer, that is, |O[x]| = [G:G[x]].
$\sum_{y∈\mathbb{O_x}}|G_y|$ =[|G[y]| = |G[x]| if y ∈ $\mathbb{O_x}$ (x and y are in the same orbit)] $\sum_{y∈\mathbb{O_x}}|G_x| = |\mathbb{O_x}||G_x|$ =[The Fundamental Counting Principle] $[G:G_x]|G_x|$ =[By Lagrange’s Theorem] $|G|$. Therefore, $\sum_{y∈\mathbb{O_x}}|G_y|=|G|$
$\sum_{g∈G}|X_g|=🚀\sum_{x∈X}|G_x|$ = [Since we have just demonstrated that orbits partition X, we are going to have |G| as many times as orbits, that is, k] k|G| ⇒ k = $\frac{1}{|G|}\sum_{g∈G}|X_g|$
• Let X = {1, 2, 3, 4, 5} and G = {1, (13), (13)(25), (25)}. G naturally acts on X. There are three orbits, so X = {1, 3} ∪ {4} ∪ {2, 5}.
Alternatively, X[id] = X, X[(13)] = {2, 4, 5}, X[((13)(25))] = {4}, X[(25)] = {1, 3, 4}.
The number k of G-orbits of X is $\frac{1}{|G|}\sum_{g∈G}|X_g|=\frac{1}{4}(5+3+1+3)=3$
• Using two colors, say black and white, in how many ways can the vertices of a square be colored? It seems pretty obvious that 2^4 = 16. However, if reflections and rotations are allowed, that is, colorings obtained from rigid motions are considered equivalent, then we have over-counted (Figure 2).
If Y = {1, 2, 3, 4}, then D[4] acts on Y in a natural way, and a coloring of the vertices is a function f: Y → {B, W}. Let X = {B, W}^Y = {f | f: {1, 2, 3, 4} → {B, W}}, |X| = 16, and D[4] acts on X naturally. Therefore, our original question is the same as asking how many orbits D[4] has on the set X.
|X[id]| = 2^4 = 16. |X[r]| =[(1234)] 2, |X[r]^2| =[(13)(24)] 4, |X[r]^3| =[(1432)] 2, |X[s]| =[(14)(23)] 4, |X[rs]| =[(24)] 8, |X[r^2s]| = 4, |X[r^3s]| = 8 (Figure 3).
|X[g]| = 2^a where 2 = |{B, W}|. a = number of cycles in the cycle decomposition of each permutation because vertices in the same cycle need to share the same color if the square was to be fixed.
Element:     Id            r       r^2       r^3     s         rs          r^2s      r^3s
Cycle form:  (1)(2)(3)(4)  (1234)  (13)(24)  (1432)  (14)(23)  (1)(24)(3)  (12)(34)  (13)(2)(4)
# cycles:    4             1       2         1       2         3           2         3
|X[g]|:      16            2       4         2       4         8           4         8
Two designs A and B are equivalent under a group G of permutations if there is an element Φ ∈ G: Φ(A) = B, that is, they are in the same orbit of G. Therefore, the number of nonequivalent designs
under G is the number of orbits of designs under G.
k = [Burnside’s Theorem] $\frac{1}{|G|}\sum_{g∈G}|X_g|$=[|X[g]| = 2^a where 2 = |{B, W}|. a = number of cycles in the cycle decomposition of each permutation because vertices in the same cycle need
to share the same color if the square was to be fixed.] = $\frac{1}{8}(16+2+4+2+4+8+4+8)=6$ (Figure 4)
• Therefore, if we could use four colors, the result would be k = [Burnside’s Theorem] $\frac{1}{|G|}\sum_{g∈G}|X_g|$=[|X[g]| = 4^a where a = number of cycles in the cycle decomposition of each
permutation because vertices in the same cycle need to share the same color if the square was to be fixed.] $=\frac{1}{8}(4^4+4^1+4^2+4^1+4^2+4^3+4^2 +4^3) = 55$
• How many ways can a tetrahedron be labeled (Figure 5)? A[4] is the rotational symmetry group of the tetrahedron, |A[4]| = 12. It contains the identity, three 2-2 cycles -each of which fixes no vertex: (12)(34), (14)(23), (13)(24)-, and eight 3-cycles -each of which fixes exactly one vertex, e.g., there are two rotations keeping 4 fixed, (132) and (123), and likewise six more rotations keeping vertex 1, 2, or 3 fixed-. However, the only element that fixes any such labeling is the identity, and |X[id]| = 4!, so k = [Burnside’s Theorem] $\frac{1}{|G|}\sum_{g∈G}|X_g|=\frac{1}{12}(4!+3·0+8·0) = 2$
• Given three colors, say red, green, and blue, how many ways can a cube be colored (counting rotations as equivalent)? k = [Burnside’s Theorem] $\frac{1}{|G|}\sum_{g∈G}|X_g|$
1. First, we need to know |G|. We are going to use the Orbit-stabilizer theorem, but on a simpler action, one that simply acts on the cube’s faces. Let’s pick the top face and label it “x”. x can be rotated to any other face ⇒ |orb[G](x)| = 6. What are the symmetries that fix this face? They can only be the rotation symmetries around the axis perpendicular to this face: id, r[90°], r[180°], and r[270°] ⇒ |stab[G](x)| = 4 ⇒ [The orbit-stabilizer theorem] |G| = |orb[G](x)|·|stab[G](x)| = 6·4 = 24 (Figure A).
2. Let’s consider |X[id]| = 3^6 where 3 is the number of colors available, and 6 faces of the cube.
3. Let’s consider a rotation about the vertical axis (Figure b), say by 90° clockwise or counterclockwise. What does this rotation fix? Suppose that the face shown in b.1 is red; the face on the left would be rotated onto the red face, so the face on the left has to be red in the first place (figure b.2). By the same argument, all four lateral faces of the cube must share the same color, so there are 3^3 possibilities (3^2 because there is no restriction on the top and bottom faces, and 3 for the shared color of the four lateral faces). Furthermore, there are three such axes (figure b.3), and for each one of them we can rotate 90° either clockwise or anticlockwise, so we need to count this 6 times.
4. If we consider the same three axes, we also need to consider the 180° rotations. In this case we have more choices: there is still no restriction on the top and bottom faces, and we can choose two different colors for two of the four lateral faces (opposite lateral faces must match), so there are 3^4 possibilities (top + bottom + 2 laterals), and there are three of those 180° rotations.
5. Let’s take into account the eight rotations about the long diagonal by 120° (figure c). Faces around vertices where the rotation axis crosses need to be of the same color in order to be fixed
(figure c, the faces 1, 2 and 3 need to be of the same color, as well as 4, 5, and 6), so there are only 3^2 possibilities, one color for each vertex.
6. Finally, we need to consider the six rotations about the axes passing through each pair of opposite edges (Figure D). We can choose the color for three of the faces, so there are 3^3 possibilities.
k = [Burnside’s Theorem] $\frac{1}{|G|}\sum_{g∈G}|X_g| = \frac{1}{24}(3^6+6·3^3+3·3^4+8·3^2+6·3^3)=\frac{1368}{24}=57.$
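The counts above (6 and 55 colorings of the square, 57 colorings of the cube) can be checked mechanically. Below is a short Python sketch, added here as an illustration rather than part of the original post; it encodes the eight elements of D[4] as vertex permutations, generates the 24 rotations of the cube as face permutations from two generator rotations, and applies Burnside’s theorem by counting cycles (with a brute-force orbit count as a cross-check for the square):

from itertools import product

def cycle_count(perm):
    """Number of cycles in the cycle decomposition of a permutation (tuple form)."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def burnside(group, num_colors):
    """Number of orbits of colorings: average of num_colors^(cycles of g) over the group."""
    return sum(num_colors ** cycle_count(g) for g in group) // len(group)

# --- Square: D4 acting on the vertices 0, 1, 2, 3 (id, r, r^2, r^3, s, rs, r^2s, r^3s) ---
D4 = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2),
      (3, 2, 1, 0), (0, 3, 2, 1), (1, 0, 3, 2), (2, 1, 0, 3)]

def brute_force(group, num_colors, n):
    """Count orbits directly by picking the smallest coloring in each orbit."""
    reps = {min(tuple(col[g[i]] for i in range(n)) for g in group)
            for col in product(range(num_colors), repeat=n)}
    return len(reps)

print(burnside(D4, 2), brute_force(D4, 2, 4))   # 6 6
print(burnside(D4, 4), brute_force(D4, 4, 4))   # 55 55

# --- Cube: rotation group acting on the faces 0:+x, 1:-x, 2:+y, 3:-y, 4:+z, 5:-z ---
ROT_Z = (2, 3, 1, 0, 4, 5)   # 90-degree rotation about the z axis
ROT_X = (0, 1, 4, 5, 3, 2)   # 90-degree rotation about the x axis

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

cube = {(0, 1, 2, 3, 4, 5)}
while True:
    new = {compose(p, g) for p in cube for g in (ROT_Z, ROT_X)} - cube
    if not new:
        break
    cube |= new

assert len(cube) == 24          # the rotation group of the cube has order 24
print(burnside(cube, 3))        # 57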
This content is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. This post relies heavily on the following resources, especially on NPTEL-NOC IITM’s Introduction to Galois Theory, Michael Penn, and Contemporary Abstract Algebra, Joseph A. Gallian.
1. NPTEL-NOC IITM, Introduction to Galois Theory.
2. Algebra, Second Edition, by Michael Artin.
3. LibreTexts, Abstract and Geometric Algebra, Abstract Algebra: Theory and Applications (Judson).
4. Field and Galois Theory, by Patrick Morandi. Springer.
5. Michael Penn (Abstract Algebra), and MathMajor.
6. Contemporary Abstract Algebra, Joseph, A. Gallian.
7. Andrew Misseldine: College Algebra and Abstract Algebra.
|
{"url":"https://justtothepoint.com/algebra/symmetrycounting/","timestamp":"2024-11-03T16:25:35Z","content_type":"text/html","content_length":"28615","record_id":"<urn:uuid:729c1e2e-2c90-4677-afbe-02317733bae7>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00511.warc.gz"}
|
Again Hellman, Diffie-Hellman Problems
Have you wondered how cryptography is done? Or how data is transferred without letting attackers know about it?
This article is focused on one of the famous problems of cryptography, the Diffie-Hellman problem. Cryptography is a way of storing data and transmitting it in a particular form. Diffie-Hellman is a
cryptographic algorithm that helps to transfer data securely with the help of two different keys.
Let’s dive into the article to learn more about the Diffie-Hellman Problem.
Cryptography Technique
Before we move forward with the Diffie-Hellman problem, you need to know more about cryptography techniques. Cryptography techniques are of two types. They are as follows:
Symmetric Key Cryptography
This type of cryptography uses a single secret key to encrypt and decrypt messages. The major issue with this technique is exchanging the secret key: it is difficult to share it safely between the sender and the receiver, and attackers who intercept the exchange can learn the secret key.
Asymmetric Key Cryptography
This type of cryptography uses different keys to encrypt and decrypt messages. The sender and the receiver use distinct keys. It is also known as public key cryptography. Two famous asymmetric algorithms are RSA and the Diffie-Hellman key exchange.
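To make the key-exchange idea concrete, here is a toy Python sketch of the Diffie-Hellman exchange. It is illustrative only: the prime, generator and private keys are made-up small numbers, whereas real systems use large standardised parameters and vetted cryptographic libraries.

# Publicly agreed parameters (toy-sized for readability).
p = 23   # a prime modulus
g = 5    # a generator modulo p

# Each party picks a private key and publishes only g^key mod p.
alice_private, bob_private = 6, 15
alice_public = pow(g, alice_private, p)   # 8
bob_public = pow(g, bob_private, p)       # 19

# Each side combines its own private key with the other's public value.
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

assert alice_shared == bob_shared == 2
print("shared secret:", alice_shared)

An eavesdropper observes only p, g and the two public values; recovering the shared secret from these is essentially the Diffie-Hellman problem this article refers to.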
|
{"url":"https://www.naukri.com/code360/library/again-hellman-diffie-hellman-problems","timestamp":"2024-11-15T04:04:33Z","content_type":"text/html","content_length":"382926","record_id":"<urn:uuid:b587faf1-531b-43e3-b219-98b2e96dfcb8>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00715.warc.gz"}
|
Smooth Particle methods
Smooth Particle Applied Mechanics (SPAM) and Smoothed Particle Hydrodynamics (SPH) are numerical methods for solving the equations of continuum mechanics (the continuity equation, the equation of
motion, and the energy equation) with particles. This approach was originated independently by Lucy ^[1] and by Gingold and Monaghan ^[2] in 1977 for astrophysical applications, and has since been
applied to many challenging problems in fluid and solid mechanics. The main advantage of smooth-particle methods is that the partial differential equations (continuity, motion, energy) are replaced
by ordinary differential equations (like molecular dynamics) describing the motion of particles. The particles can be of any size, from the microscopic to the astrophysical, and can obey any chosen
constitutive equation. The main disadvantages are the difficulties in treating sharp surfaces or interfaces with discrete particles and in avoiding the instabilities that can result for materials
under tension.
Some works have been able to link this technique and DPD, thus creating the "SDPD method" ^[3]. Another approach is to define the volume of a particle as the volume of its Voronoi cell.
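As a minimal illustration of the particle viewpoint (an added sketch, not taken from the cited works), the following Python snippet estimates a density field with the standard SPH summation rho_i = sum_j m_j W(|x_i - x_j|, h), here in one dimension with a Gaussian smoothing kernel:

import numpy as np

def gaussian_kernel_1d(r, h):
    """Gaussian smoothing kernel, normalised so that its integral over 1D space is 1."""
    return np.exp(-(r / h) ** 2) / (h * np.sqrt(np.pi))

# Equal-mass particles representing a uniform 1D slab of total mass 1 on [0, 1].
n = 200
x = np.linspace(0.0, 1.0, n)
m = np.full(n, 1.0 / n)
h = 0.05                                   # smoothing length

# SPH density estimate: rho_i = sum_j m_j * W(|x_i - x_j|, h)
r = np.abs(x[:, None] - x[None, :])
rho = (m[None, :] * gaussian_kernel_1d(r, h)).sum(axis=1)

print(rho[n // 2])   # close to the true density of 1 away from the slab edges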
1. ↑ L. B. Lucy "A numerical approach to the testing of the fission hypothesis", Astronomical Journal 82 pp. 1013-1024 (1977)
2. ↑ R. A. Gingold and J. J. Monaghan "Smoothed particle hydrodynamics: theory and application to non-spherical stars", Monthly Notices of the Royal Astronomical Society 181 pp. 375–389 (1977)
Related reading
• William Graham Hoover "Smooth Particle Applied Mechanics -The State of the Art", Advanced Series in Nonlinear Dynamics 25 World Scientific Publishing (2006) ISBN 978-981-270-002-5
|
{"url":"http://www.sklogwiki.org/SklogWiki/index.php/Smooth_Particle_methods","timestamp":"2024-11-08T02:26:28Z","content_type":"text/html","content_length":"21315","record_id":"<urn:uuid:fdf4ce82-14b6-4d35-b5eb-345467c22cbc>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00563.warc.gz"}
|
A review of different mascon approaches for regional gravity field modelling since 1968
Articles | Volume 13, issue 2
© Author(s) 2022. This work is distributed under the Creative Commons Attribution 4.0 License.
A review of different mascon approaches for regional gravity field modelling since 1968
The geodetic and geophysical literature shows an abundance of mascon approaches for modelling the gravity field of the Moon or Earth on global or regional scales. This article illustrates the
differences and similarities between the methods, which are labelled as mascon approaches by their authors.
Point mass mascons and planar disc mascons were developed for modelling the lunar gravity field from Doppler tracking data. These early models had to consider restrictions in observation geometry,
computational resources or geographical pre-knowledge, which influenced the implementation. Mascon approaches were later adapted and applied for the analysis of GRACE observations of the Earth's
gravity field, with the most recent methods based on the simple layer potential.
Differences among the methods relate to the geometry of the mascon patches and to the implementation of the gradient and potential for field analysis and synthesis. Most mascon approaches provide a
direct link between observation and mascon parameters – usually the surface density or the mass of an element – while some methods serve as a post-processing tool of spherical harmonic solutions.
This article provides a historical overview of the different mascon approaches and sketches their properties from a theoretical perspective.
Received: 20 Mar 2022 – Discussion started: 19 Apr 2022 – Revised: 26 Aug 2022 – Accepted: 29 Aug 2022 – Published: 29 Sep 2022
The gravity field of the Earth influences daily life in many ways. Local plumb lines define the upward direction, and several scientific instruments must be levelled before usage. Gravity
measurements provide corrections for geophysical height systems and allow the exploration of mineral deposits or caves. A gravity field model is also required for inertial navigation systems within
aeroplanes, ships or submarines. On a regional or global scale, mass redistributions – like the melting of glaciers or the changes in ground water – are reflected in the temporal variations of the
gravity field.
The gravity field of the Earth – or another celestial body – can be analysed on a global scale when enough orbiting satellites are tracked, even without special onboard instruments or ground-based
measurements. Satellite data provide a fast and homogeneous sampling in contrast to ground-based observations. The gravity field analysis establishes a connection between the tracking data and a set
of base functions to model the gravity field. An analysis by spherical harmonic functions is preferable for spherical bodies, as these base functions are the natural solution of the Laplace equation.
This set of base functions is also a complete and orthogonal system, which simplifies the analysis. However, a reasonable spherical harmonic analysis requires orbit observations with global and
homogeneous data distribution, and the model will have the same resolution everywhere.
Alternative localising base functions – e.g. point masses, spherical radial base functions, wavelets, Slepian function – are investigated and applied when the data distribution is irregular or when
more details in a region of interest shall be detected. This article will summarise the localising base functions, which are labelled as mascons and applied for the gravity field modelling of the
Earth and Moon.
Studies of the early lunar orbiters demonstrated significant orbit disturbances, which were traced back to an irregular lunar gravity field. The term “mascon” was introduced by Muller and Sjogren (
1968b) for describing these mass concentrations near the surface. In the same work, the name mascon was also introduced for the mathematical modelling of these mass concentrations. The concept was
applied for several years to the gravity field of the Moon, as the method could cope with data restricted to the nearside, in contrast to spherical harmonic solutions. Interest in regional modelling
of the Earth's gravity field has increased significantly since the gravity field mapping mission GRACE (2002–2017) and its successor mission GRACE-FO (2018–present). The new observations enabled the
analysis of temporal variations caused by the redistribution of water and ice masses, where regional gravity field modelling overcomes the spherical harmonic solutions. Hence, the mascon concept has
been adapted and applied to Earth-related data by several research groups, either for regions of interest (Luthcke et al., 2008; Schrama et al., 2014; Ran et al., 2018) or on a global scale (Koch and
Witte, 1971; Andrews et al., 2015; Save et al., 2016).
A closer inspection of the publications, however, shows a variety of approaches under the label of mascons. This article will give a historical overview of the most prominent representatives and an
adequate definition of the mascon base functions. All different meanings of the investigated mascon approaches can be covered by the following definition: the term mascon either refers to the fact of
a significant gravitational anomaly within a celestial body or to a modelling of these anomalies by localising base functions. The localising base functions, which are labelled as mascons, include
point masses or discrete surface elements based on the simple layer potential. In the case of surface elements, the surface density is constant per mascon, and each localising base function is – in a
spectral representation, at least in the limit of high-degree expansion – a two-dimensional step function on the sphere. Methods of post processing are also labelled as a mascon approach when their
surface elements have a constant surface density. The shape of the mascon is not relevant for the definition, and the surface of the celestial body is not necessarily covered.
This publication is focused on the mascons' definitions and will ignore other processing steps, like background models, regional constraints or regularisation techniques. Each mascon approach is
presented by the associated gravitational potential of a single element and its gradient in the notation of representative literature. The properties of each approach are deduced from the theoretical
perspective only, but without treating programming experiments or numerical aspects. Such a detailed and comprehensive review of the different mascon approaches cannot be found in the literature, to our knowledge.
In several previous articles, the authors quote only the original publication (Muller and Sjogren, 1968b) for the term mascon and restrict themselves in the following texts to a specific mascon
approach with its literature (e.g. Luthcke et al., 2008; Lemoine et al., 2007; Krogh, 2011; Andrews et al., 2015).
A point mass model and planar discs are applied for modelling the lunar gravity field in Wong et al. (1971), and both methods are considered as mascon approaches. In Watkins et al. (2015) and Save
et al. (2016), different mascon approaches are presented in the introductions, but without formulas or historical background. The authors of both articles classify three principal concepts:
• A. mascons that have an analytical expression for the gravitational potential and explicit partial derivatives for the gradient;
• B. mascons that are represented by a finite series of spherical harmonic functions and with partial derivatives derived via the chain rule;
• C. mascons that serve as a post-processing tool to obtain regional mass changes from monthly spherical harmonic solutions.
An analogous classification with additional literature is presented by Abedini et al. (2021a), whose contribution is a numerical method for the gradient, which does not fit into the threefold scheme.
Many recent publications are related to the mascon solutions of either the NASA Goddard Space Flight Center (GSFC), the Jet Propulsion Laboratory (JPL) or the Center for Space Research (CSR). The
current JPL solutions are spherical cap mascons with analytical partial derivatives – i.e. category A in the classification – which are presented in Sect. 3.2. The mascon approaches of GSFC and CSR
are based on spherical harmonic functions, and they are a prominent example of type B (see Sect. 3.1). The mascon visualisation tool at the University of Colorado Boulder (https://ccar.colorado.edu/
grace/index.html, last access: 23 September 2022) enables an analysis and comparison of the latest solutions at JPL and GSFC for regions and generates time series per location.
2 Mascons for modelling the lunar gravity field
2.1 Mascons – mass anomalies close to the Moon's surface
The origin of the mascon concept is closely related to early models of the lunar gravity field.
In the space race between the Soviet Union (USSR) and the United States of America (USA), both nations wanted to send their representatives to the Moon first. The possible landing sites were
investigated by spacecrafts, starting with Luna 1 (USSR) in 1959, which missed the Moon due to navigation issues. The first man-made object on the Moon was the space probe Luna 2 (USSR) in a design
impact in 1959, followed by several missions by both nations. The spacecraft Luna 10 (USSR) and Lunar Orbiter 1 (USA) were the first artificial orbiters around the Moon in 1966 (Neal, 2008).
In both orbiter missions, the observed orbits differed after a short time from the predicted ones, which indicated either an incorrect or an incomplete model. As other error sources could be excluded
soon, the orbit disturbances were explained by significant mass anomalies below the Moon's surface. For these anomalies, the term “mass concentration” or “mascon” was introduced in Muller and Sjogren (1968b).
All identified mascons on the nearside of the Moon cause relatively large and positive effects up to 200mGal, and their locations are one-to-one correlated to the major lunar maria, including
Imbrium, Serenitatis, Crisium, Humorum and Nectaris, which are visualised in Fig. 1 (Muller, 1972).
In particular for the Moon it is still common to call a large area with a significant positive mass anomaly a mascon independent of its mathematical representation (Floberghagen, 2001, p. 3). A
similar behaviour can be found, for example, in Barthelmes (1986, p. 35), where the mass anomalies of the Earth's gravity field are called mascons without using the phrase for the mathematical
modelling as well. This thesis focuses on the point mass modelling, but it also sketches simple layer potential with discrete surface elements, and both aspects will be identified as mascon
approaches in the current article.
2.2 Point mass mascons
A quick modelling of the mass anomalies was important for the preparation of the latter space missions and the landing on the Moon. The chosen representation should
• consider the geographical pre-knowledge, i.e. the lunar maria as expected locations of the mass anomalies;
• consider the observation geometry, i.e. the fact that only the near side of the Moon allows observations from terrestrial ground stations;
• enable a direct relation between observables – Doppler tracking data in the case of the early lunar missions – and the estimated mascon parameters;
• remain simple due to limited computer resources.
The first three requirements are still important arguments for regional gravity field analysis – by mascons, wavelets, radial basis functions, Slepian functions, etc. – while the limited resources
implied a simple modelling of the anomalies by point masses.
The original papers (Muller and Sjogren, 1968a, b; Muller, 1972) lack a formula representation of the potential, but it is re-constructed, for example, in Floberghagen (2001, p. 19):
$V(\mathbf{r}_{\mathrm{P}}) = GM\left(\frac{1}{\lVert\mathbf{r}_{\mathrm{P}}\rVert} - \sum_{q=1}^{Q}\frac{\delta m_{q}}{\lVert\mathbf{r}_{\mathrm{P}} - \mathbf{r}_{q}\rVert}\right) \qquad (1)$
• V(r[P]): gravitational potential at the calculation point r[P];
• G: gravitational constant;
• M: mass of the celestial body;
• δm[q]: mass ratio between point masses and total mass M;
• r[q]: centres of the point masses.
Please note that, for consistency, all mascon quantities and their geometries are labelled in this article by means of an index (here: $q = 1, 2, \dots, Q$), and the calculation point is labelled by the index P, both independent of the cited articles.
2.2.1 Relation to the observation and the estimation process
A standard observation technique for space probes is the Doppler tracking, i.e. the change in frequency of a (re)-transmitted signal due to the relative motion of the spacecraft and the ground
station. The American missions use a few globally distributed stations, which meanwhile form the Deep Space Network of the NASA and which are operated by JPL today^1. The Doppler signal does not
provide complete information on the position or velocity; rather, it only projects the relative velocity between station and space probe onto the line of sight (Muller and Sjogren, 1968a; Weinwurm,
2004; Floberghagen, 2001).
The relationship between observation and mascon parameters requires a description of the change in the velocity – i.e. the acceleration – of the spacecraft caused by the gravitational potential.
Hence, it is sufficient to derive the gradient $\ddot{\mathbf{r}}_{\mathrm{P}} = \nabla V(\mathbf{r}_{\mathrm{P}})$ of the potential. For point mass models, the gradient is calculated via
$\nabla V(\mathbf{r}_{\mathrm{P}}) = GM\left(-\frac{\mathbf{r}_{\mathrm{P}}}{\lVert\mathbf{r}_{\mathrm{P}}\rVert^{3}} + \sum_{q=1}^{Q}\frac{\delta m_{q}}{\lVert\mathbf{r}_{\mathrm{P}} - \mathbf{r}_{q}\rVert^{3}}\,(\mathbf{r}_{\mathrm{P}} - \mathbf{r}_{q})\right) \qquad (2)$
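A direct implementation of Eqs. (1) and (2) is straightforward. The following Python/NumPy sketch is an added illustration with arbitrary example values, not a calibrated model; it evaluates the potential and the acceleration of a point mass mascon model using the sign convention of the equations above.

import numpy as np

def mascon_potential(r_p, GM, delta_m, r_q):
    """Potential of Eq. (1): central term plus point mass mascons."""
    d = np.linalg.norm(r_p - r_q, axis=1)        # distances to the mascon centres
    return GM * (1.0 / np.linalg.norm(r_p) - np.sum(delta_m / d))

def mascon_gradient(r_p, GM, delta_m, r_q):
    """Gradient of Eq. (2), i.e. the acceleration at the calculation point."""
    diff = r_p - r_q                             # vectors from the mascons to r_p
    d = np.linalg.norm(diff, axis=1)
    central = -r_p / np.linalg.norm(r_p) ** 3
    return GM * (central + np.sum((delta_m / d ** 3)[:, None] * diff, axis=0))

# Purely illustrative values, not a calibrated lunar model:
GM = 4902.8                                      # km^3/s^2, roughly the lunar GM
r_q = np.array([[1690.0, 0.0, 0.0],              # mascon centres below the surface (km)
                [0.0, 1690.0, 0.0]])
delta_m = np.array([1e-4, 5e-5])                 # mass ratios delta m_q
r_p = np.array([1840.0, 0.0, 0.0])               # calculation point near 100 km altitude (km)

print(mascon_potential(r_p, GM, delta_m, r_q))   # km^2/s^2
print(mascon_gradient(r_p, GM, delta_m, r_q))    # km/s^2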
To emphasise the special requirements and restrictions for the early lunar modelling, some details will be sketched here as well: according to Muller and Sjogren (1968a, b), residual observations are
created by removing the gravitational effect of a tri-axial Moon model and the acceleration of the Sun and other planets from the raw Doppler tracking data. Cubic polynomials are fitted to the
residuals for smoothing and estimation of accelerations. The accelerations are mapped to a constant orbit height of 100km altitude above the Moon's surface. The point masses are introduced directly
below the trajectory, with a depth of 50km below the surface, and their magnitudes are estimated. Additional information is given in Wong et al. (1971), such as the restriction to 100 parameters in
the estimation process due to implementation as well as the step-wise solutions in north–south bands, which usually cover 8 trajectories – with 48 elements in the estimated state vectors – and around
50 point masses below the tracks.
2.3 Point mass mascons for irregular celestial bodies
Point mass mascons are also used in a different way to determine the gravity field of irregular celestial bodies. An example can be found in Chanut et al. (2015), where the gravity field of the
asteroid 216 – also known as Kleopatra – is predicted by polyhedron models and point mass mascons. In the case of asteroids, the irregular shape is observed by optical instruments first; only in rare
cases, orbiters investigate the gravity field directly. The observed shape is approximated by tetrahedrons with three corners on the surface and one in the geometrical centre of the asteroid (see
Fig. 2a). Point masses are located then, either one per tetrahedron in its geometrical centre or three in the centres of a geometrically sub-divided tetrahedron (see Fig. 2b). Assuming a constant
density of the asteroid and a known total mass, the mass per mascon is assigned to a value proportional to the surrounding volume, and the gravity field around the object can be predicted.
• The point mass mascons have closed formulas for potential and explicit partial derivatives, which identify them as type A mascons in the threefold scheme.
• The method is very easy to implement and requires only a few computational resources.
• The gradient and all other field quantities are found without quadrature.
• The model is singular for the potential and the gradient at the location of the point masses.
• In the case of the lunar gravity field, assumptions are required for the location and depth below ground, as the Doppler tracking data and the observation geometry do not allow a detection of this information from the measurement.
It should be pointed out that the modelling by point masses is applied, for example, in Baur and Sneeuw (2011) or in Barthelmes (1986), Claessens et al. (2001), and Lin et al. (2014) without being
labelled as a mascon approach by the authors, and that in the latter examples also the positions of the masses are estimated for regional studies of the Earth's gravity field. An iterative algorithm
is developed and justified via quasi-orthogonality in the sense of an inner product in Barthelmes (1986). To stabilise the optimisation process, the possible movement per point mass shall be
restricted in depth but also in radial or tangential direction with respect to an initial position.
2.4 Planar disc mascons
As a response to Muller and Sjogren (1968b), an article by Conel and Holstrom (1968) presented a physical interpretation of the ringed lunar maria, according to which former impact craters are filled
afterwards by denser material. The authors experiment in the modelling of the mass anomalies with an arrangement of planar discs of finite thickness inside the impact craters and demonstrate a better
post-fit to the residual Doppler tracking data for Mare Serenitatis.
The obvious issues of point masses are discussed in Wong et al. (1971):
• the singularities of the model at the centres;
• bad fitting of the residual tracking data in the equatorial zone of the Moon due to the observation geometry;
• and combination issues with the spherical harmonic models (of very low degree and order at the time).
To overcome these problems, finite mass elements are suggested for modelling the gravitational anomalies, which also agrees with the physical ideas in Conel and Holstrom (1968).
For a simple and efficient solution, the finite mass elements are chosen to be oblate rotational ellipsoids, also known as spheroids (Wong et al., 1971). The gravitational potential of a spheroid
and its gradient are derived in Moulton (1960, pp. 119–132). On the one hand, the gravitational potential requires a series expression:
$V = \frac{M}{R}\left[1 + \frac{b^{2}}{10}\,\frac{x_{\mathrm{P}}^{2} + y_{\mathrm{P}}^{2} - 2z_{\mathrm{P}}^{2}}{\cdots} + \cdots\right] \qquad (3)$
• $M$: total mass of the spheroid (the gravitational constant is neglected in this exercise of the book);
• $R = \lVert\mathbf{r}_{\mathrm{P}}\rVert$: Euclidean distance between the spheroid's centre and the calculation point $\mathbf{r}_{\mathrm{P}} = (x_{\mathrm{P}}, y_{\mathrm{P}}, z_{\mathrm{P}})$ outside the body;
• $b$: semi-minor axis of the spheroid (and semi-major axis $a$);
• $e = \sqrt{(a^{2} - b^{2})/a^{2}}$: numerical eccentricity.
On the other hand, the gradient of the potential can be derived in a closed formula. In Wong et al. (1971), the semi-minor axis b is then squeezed to zero, which leads to the attraction of a circular
and planar disc. The article provides the gradient of a single disc in the form
$\ddot{x} = -\frac{3Gm}{2a^{3}}\left(\frac{-\sqrt{k}}{1+k} + \arcsin\frac{1}{\sqrt{1+k}}\right)x$
$\ddot{y} = -\frac{3Gm}{2a^{3}}\left(\frac{-\sqrt{k}}{1+k} + \arcsin\frac{1}{\sqrt{1+k}}\right)y$
$\ddot{z} = \frac{3Gm}{a^{3}}\left(\frac{1}{\sqrt{k}} - \arcsin\frac{1}{\sqrt{1+k}}\right)z \qquad (4)$
where k fulfils the quadratic equation
$k^{2}a^{2} + \bigl(a^{2} - (x^{2} + y^{2} + z^{2})\bigr)k - z^{2} = 0. \qquad (5)$
To bring the expressions of the gradient in Moulton (1960) and Wong et al. (1971) into an analogous form, the identity $\arcsin\zeta = \arctan\left(\zeta/\sqrt{1-\zeta^{2}}\right)$ must be kept in mind. It also turns out that the value $k$ is linked to the numerical eccentricity of the spheroid by the relation $e = 1/\sqrt{1+k}$.
These planar disc mascons must be rotated and translated on the surface or close to it onto different locations, which is only implicitly indicated due to the definition of the coordinates $(x, y, z)$ with respect to the centre of each disc (Wong et al., 1971).
• The planar disc mascons have closed formulas for explicit partial derivatives of the potential, which identifies them as type A mascons.
• The closed formulas do not require any integration for the gradient.
• The surface elements all have the same shape, size and area for each mascon.
• The potential of a mascon requires a series expansion.
• The model is singular for the potential at the centre of the disc.
• The surface elements do not cover the complete surface, even in a global analysis.
• Most points within a disc are either above or below the spherical surface.
In fact, the planar disc mascons are a kind of a simple layer potential, but without implicit or explicit integration for the gradient.
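Equation (5) can be solved for k in closed form, after which the in-plane components of Eq. (4) are easy to evaluate. The short Python sketch below is an added illustration with arbitrary values; it uses only the x-component and confirms numerically that, far from the disc, the attraction approaches that of a point mass of the same mass.

import numpy as np

def k_value(x, y, z, a):
    """Positive root of Eq. (5): k^2 a^2 + (a^2 - (x^2 + y^2 + z^2)) k - z^2 = 0."""
    b = a ** 2 - (x ** 2 + y ** 2 + z ** 2)
    return (-b + np.sqrt(b ** 2 + 4 * a ** 2 * z ** 2)) / (2 * a ** 2)

def disc_accel_x(x, y, z, a, Gm):
    """x-component of Eq. (4) for a planar disc mascon of radius a and mass m."""
    k = k_value(x, y, z, a)
    factor = -np.sqrt(k) / (1 + k) + np.arcsin(1.0 / np.sqrt(1 + k))
    return -(3 * Gm / (2 * a ** 3)) * factor * x

a, Gm = 1.0, 1.0
for x in (3.0, 10.0, 30.0):                      # points in the disc plane (y = z = 0)
    disc = disc_accel_x(x, 0.0, 0.0, a, Gm)
    point = -Gm * x / abs(x) ** 3                # point mass attraction for comparison
    print(x, disc, point)                        # the two values converge as x grows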
3 Simple layer potential and its regional subdivision
Modelling the gravitational potential by a simple layer was well known in geodesy and became popular around 1970.
The method can be applied to the complete potential or to a residual field after subtracting a reference field. The basic idea is to condense the (remaining) inhomogeneous mass distribution onto
the surface 𝒮, either the topography itself or a simpler reference like a sphere or spheroid (Koch and Witte, 1971; Morrison, 1971).
The gravitational potential of the layer is given by
$V(\mathbf{r}_{\mathrm{P}}) = G\iint_{\mathcal{S}}\frac{\sigma(\Omega)}{\ell(\Omega, \mathbf{r}_{\mathrm{P}})}\,\mathrm{d}\Omega, \qquad (6)$
• V(r[P]): gravitational potential at the calculation point r[P];
• σ(Ω): location-dependent surface density;
• G: gravitational constant;
• $\ell(\Omega, \mathbf{r}_{\mathrm{P}}) = \sqrt{(x_{\mathrm{P}}-x)^{2} + (y_{\mathrm{P}}-y)^{2} + (z_{\mathrm{P}}-z)^{2}}$: Euclidean distance^2 between calculation point $\mathbf{r}_{\mathrm{P}} = (x_{\mathrm{P}}, y_{\mathrm{P}}, z_{\mathrm{P}})$ and all surface points $\mathbf{P} = (x, y, z)$, with $\mathbf{P} \in \mathcal{S}$;
• dΩ: the differential surface element.
In the mascon version of the simple layer potential, the surface 𝒮 is sub-divided into smaller regions 𝒮[q] – which are called surface elements or patches in this article – where the density is
assumed to be constant. This leads to the mascon representation of the (residual) potential:
$V_{q}(\mathbf{r}_{\mathrm{P}}) = G\,\sigma_{q}\iint_{\mathcal{S}_{q}}\frac{1}{\ell(\Omega, \mathbf{r}_{\mathrm{P}})}\,\mathrm{d}\Omega, \qquad (7)$
$V(\mathbf{r}_{\mathrm{P}}) = \sum_{q=1}^{Q} V_{q}(\mathbf{r}_{\mathrm{P}}). \qquad (8)$
A linear combination Eq. (8) of all mascons – where the summation weights σ[q] are included in the potential V[q](r[P]) per mascon – generates the potential of the simple layer again. On the one
hand, it should be pointed out that the method is applied, for example, in Koch and Witte (1971) without being labelled as a mascon approach. On the other hand, all the following mascon approaches
are based on the simple layer potential with discrete surface elements and constant surface densities, which justifies the mascon label here.
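The surface integral in Eq. (7) generally has to be evaluated numerically. The following Python sketch is an added illustration with made-up parameter values; it approximates the potential of a single spherical mascon patch bounded by parallels and meridians with a simple midpoint rule, and checks that extending the patch to the whole sphere reproduces the expected shell potential at an exterior point.

import numpy as np

def patch_potential(r_p, sigma, R, lat_bounds, lon_bounds, n=200, G=6.674e-11):
    """Midpoint-rule approximation of Eq. (7) for one mascon patch on a sphere of radius R.

    lat_bounds and lon_bounds are (min, max) in radians; sigma is the constant surface density.
    """
    lat_edges = np.linspace(*lat_bounds, n + 1)
    lon_edges = np.linspace(*lon_bounds, n + 1)
    lat = 0.5 * (lat_edges[:-1] + lat_edges[1:])
    lon = 0.5 * (lon_edges[:-1] + lon_edges[1:])
    dlat = lat_edges[1] - lat_edges[0]
    dlon = lon_edges[1] - lon_edges[0]

    lat_g, lon_g = np.meshgrid(lat, lon, indexing="ij")
    # Surface points and surface element dOmega = R^2 cos(lat) dlat dlon
    xyz = R * np.stack([np.cos(lat_g) * np.cos(lon_g),
                        np.cos(lat_g) * np.sin(lon_g),
                        np.sin(lat_g)], axis=-1)
    d_omega = R ** 2 * np.cos(lat_g) * dlat * dlon
    ell = np.linalg.norm(xyz - r_p, axis=-1)     # distance l(Omega, r_P)
    return G * sigma * np.sum(d_omega / ell)

R = 6.378e6                          # m, an Earth-sized sphere (illustrative)
sigma = 100.0                        # kg/m^2, an arbitrary constant surface density
r_p = np.array([2 * R, 0.0, 0.0])    # exterior calculation point

V_patch = patch_potential(r_p, sigma, R,
                          (np.radians(30), np.radians(50)),
                          (np.radians(-10), np.radians(20)))
# Whole sphere as a consistency check: V = G * (sigma * 4 pi R^2) / r
V_sphere = patch_potential(r_p, sigma, R, (-np.pi / 2, np.pi / 2), (0.0, 2 * np.pi))
print(V_patch)
print(V_sphere, 6.674e-11 * sigma * 4 * np.pi * R ** 2 / (2 * R))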
3.1 Lumped spherical harmonics as mascons
Solving the Laplace equation in spherical coordinates leads to the spherical harmonic functions as a natural basis for gravity field modelling. An adequate linear combination of spherical harmonic
functions can also be used to define localising base functions like the mascons in the spectral domain. Due to this combination over all degrees and orders, the result is sometimes labelled as the
“lumped spherical harmonic approach” (Klosko et al., 2009).
Firstly, the gravity field is decomposed into a static field and its temporal variations:
$V = V_{0} + V_{t}. \qquad (9)$
The static field and the mascons are represented by spherical harmonic synthesis. According to Heiskanen and Moritz (1967), Koch and Witte (1971) and Seeber (2003), the potential is given by
$V_{0}(\lambda_{\mathrm{P}}, \theta_{\mathrm{P}}, r_{\mathrm{P}}) = \frac{GM}{r}\sum_{l=0}^{L}\left(\frac{R}{r}\right)^{l}\sum_{m=0}^{l}\bar{P}_{l,m}(\cos\theta_{\mathrm{P}})\left(\bar{C}_{l,m}\cos m\lambda_{\mathrm{P}} + \bar{S}_{l,m}\sin m\lambda_{\mathrm{P}}\right), \qquad (10)$
• $V_{0}(\lambda_{\mathrm{P}}, \theta_{\mathrm{P}}, r_{\mathrm{P}})$: potential of the static field;
• $(\lambda_{\mathrm{P}}, \theta_{\mathrm{P}}, r_{\mathrm{P}})$: spherical coordinates of the evaluation point $\mathbf{r}_{\mathrm{P}}$, i.e. longitude $\lambda_{\mathrm{P}}$, co-latitude $\theta_{\mathrm{P}}$ and radius $r_{\mathrm{P}}$;
• $GM$: product of gravitational constant $G$ and the mass of the celestial body $M$;
• $R$: radius or semi-major axis of the spherical or ellipsoidal reference body;
• $\bar{P}_{l,m}(\cos\theta)$: fully normalised Legendre functions;
• $\{\bar{C}_{l,m}, \bar{S}_{l,m}\}$: fully normalised spherical harmonic coefficients, also known as Stokes coefficients.
The approach arose at GSFC when analysing the data of the GRACE mission, and it is presented in a sequence of articles (Rowlands et al., 2005; Lemoine et al., 2007; Klosko et al., 2009; Rowlands
et al., 2010; Luthcke et al., 2013).
The mascons are generated in the spectral domain by (time-dependent) delta Stokes coefficients or differential Stokes coefficients of a simple layer:
$$\begin{aligned}\Delta\bar{C}_{l,m}^{q}(t) &= \frac{(1+k_l')R^{2}}{(2l+1)M}\,\sigma_q(t)\iint_{\mathcal{S}_q}\bar{P}_{l,m}(\cos\theta)\cos m\lambda\,\mathrm{d}\Omega,\\ \Delta\bar{S}_{l,m}^{q}(t) &= \frac{(1+k_l')R^{2}}{(2l+1)M}\,\sigma_q(t)\iint_{\mathcal{S}_q}\bar{P}_{l,m}(\cos\theta)\sin m\lambda\,\mathrm{d}\Omega,\end{aligned} \tag{11}$$
with the Love numbers $k_l'$ accounting for the loading effects of the extra masses on the surface.
The mascon solutions of the GSFC are published online (https://earth.gsfc.nasa.gov/geo/data/grace-mascons, last access: 23 September 2022); the mascon solution of the CSR can also be found online (http://www.csr.utexas.edu/grace/, last access: 23 September 2022). As the formulas require only standard techniques of geoscience, other groups also work with this kind of mascon (e.g. Andrews et al., 2015; Krogh, 2011).
The lumped spherical harmonic approach can be used for any (almost spherical) body, but it was introduced in particular for analysing the temporal variations of the Earth's gravity field due to variable water storage. Taking into account that a uniform layer of 1 cm of fresh water over an area of 1 m² has a mass of around 10 kg (water density 1000 kg m⁻³ × 0.01 m), the density is re-written in Rowlands et al. (2010) and Luthcke et al. (2013) as $\sigma_q = 10\,H_q$ – in Save et al. (2016) the factor $\sigma_q = 10.25\,H_q$ is used instead – to express the results in centimetres of equivalent water height. Each mascon is determined by a spherical harmonic synthesis
$$H_q(\mathbf{r}_\mathrm{P},t) = \frac{M}{40\pi R^{2}}\sum_{l=0}^{L}\left(\frac{2l+1}{1+k_l'}\right)\sum_{m=0}^{l}\bar{P}_{l,m}(\cos\theta_\mathrm{P})\left(\Delta\bar{C}_{l,m}^{q}(t)\cos m\lambda_\mathrm{P}+\Delta\bar{S}_{l,m}^{q}(t)\sin m\lambda_\mathrm{P}\right) \tag{12}$$
on the spherical surface $r=R$ and with the upward continuation term $(R/r)^{l}$ in the synthesis formula, if necessary.
If the maximum degree $L$ of the expansion is large enough, expression Eq. (12) forms a “two-dimensional step function” on the sphere $\mathcal{S}$ (see Fig. 3), with
$$H_q(\mathbf{r}_\mathrm{P},t) = \begin{cases} H_q(t) & \text{for } \mathbf{r}_\mathrm{P}\in\mathcal{S}_q,\\ 0 & \text{else.} \end{cases} \tag{13}$$
A straightforward sub-division of a sphere is given by a longitude–latitude grid, i.e. all boundaries are either parts of parallel circles or of meridians. In this case, the integrals Eq. (11) have the differential surface element $\mathrm{d}\Omega = \sin\theta\,\mathrm{d}\theta\,\mathrm{d}\lambda$ of the unit sphere (with co-latitude $\theta$), and the integration can be carried out by recursion formulas for integrated Legendre functions.
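As a rough illustration of this integration (a sketch with our own naming; the cited papers use recursion formulas instead), Eq. (11) can also be approximated by a midpoint quadrature over one equiangular patch:

    import numpy as np
    from math import factorial
    from scipy.special import lpmv

    def delta_stokes(l, m, sigma_q, R, M, k_l, th0, th1, la0, la1, n=200):
        # Midpoint quadrature of Eq. (11) over the patch [th0,th1]x[la0,la1]
        # (co-latitude and longitude in radians); returns (dC, dS).
        dth, dla = (th1 - th0) / n, (la1 - la0) / n
        th = th0 + (np.arange(n) + 0.5) * dth
        la = la0 + (np.arange(n) + 0.5) * dla
        TH, LA = np.meshgrid(th, la, indexing="ij")
        norm = np.sqrt((2 - (m == 0)) * (2 * l + 1)
                       * factorial(l - m) / factorial(l + m))
        Plm = norm * (-1) ** m * lpmv(m, l, np.cos(TH))
        pref = (1 + k_l) * R ** 2 * sigma_q / ((2 * l + 1) * M)
        dOm = np.sin(TH) * dth * dla        # surface element sin(theta) dtheta dlambda
        dC = pref * np.sum(Plm * np.cos(m * LA) * dOm)
        dS = pref * np.sum(Plm * np.sin(m * LA) * dOm)
        return dC, dS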
The size and shape of the surface elements vary between the publications:
• Lemoine et al. (2007) and Rowlands et al. (2005) present a separation of the region of interest into surface elements of equal angles with the dimension 4°×4°, while Krogh (2011) defines patches of the dimensions 1.25°×1.5° and 1.5°×1.5°.
• Equal areas within a longitude–latitude grid can be obtained by stretching or shrinking one of the angles depending on latitude, which was already discussed in Morrison (1971) and applied in experiments of Rowlands et al. (2010) and Andrews et al. (2015).
• In Klosko et al. (2009), the surface elements have – at least in the corresponding Fig. 4 – more complex boundaries. The lines still run along parallel circles and meridians, but they are combined in such a way that the mascon patches fill the irregular shapes of sub-basins within the Mississippi basin.
• In the CSR solution, equal area per mascon is considered more relevant than a simple sub-division or a complete coverage of the sphere (Save et al., 2016). A geodesic grid with 40962 vertices is generated by iteration, and the mascon patches are located at its centres. The patches are either hexagonal or pentagonal, and they cover approximately equal areas with a diameter of around 1°.
The regularisation techniques for equiangular patches are discussed in Abedini et al. (2021b) for other types of mascons, but the final recommendation – to take the area sizes into account – should be transferable to the lumped spherical harmonic approach as well.
Observation of GRACE
The mascons were introduced for analysing the Earth's gravity field within the mission GRACE (Gravity Recovery And Climate Experiment). The mission consisted of two identical satellites, which were launched in 2002 in a cooperation between NASA/JPL and the German DLR. The satellites orbited the Earth in one common and almost circular orbit, with an originally low altitude of 500 km. The positions were quasi-permanently observed by GPS receivers with three antennas, and onboard three-axis accelerometers measured the combined influence of all non-gravitational effects. The main observable was the variation of the distance between the two GRACE satellites, measured by microwaves in the K-band and Ka-band via a range-rate measurement system. The distance of $\rho \approx 250$ km between the satellite centres varied due to the mass variations below, and the K-band provided a nominal accuracy of 10 µm for the range $\rho$ and 0.5 µm s$^{-1}$ for the range-rate $\dot\rho$ (Seeber, 2003; Tapley et al., 2004).
The orbit observations and the gravity field parameters can be linked in different ways – e.g. the variational equation, the energy balance approach, the short-arc approach or the acceleration approach – which are sketched, for example, in Liu (2008). The details are not the focus of this work, but most methods require the gradient of the gravitational potential again.
Gradient of the lumped spherical harmonic mascons
For the range-rate $\dot\rho$ in the lumped spherical harmonic approach, the relationship is found in Luthcke et al. (2013) via the chain rule
$$\frac{\partial\dot\rho}{\partial H_q} = \sum_{l=0}^{L}\sum_{m=0}^{l}\frac{\partial\dot\rho}{\partial\bar{C}_{l,m}}\frac{\partial\Delta\bar{C}_{l,m}^{q}}{\partial H_q}+\frac{\partial\dot\rho}{\partial\bar{S}_{l,m}}\frac{\partial\Delta\bar{S}_{l,m}^{q}}{\partial H_q} \tag{14}$$
and analogously for the range-acceleration $\ddot\rho$. The derivatives $\{\partial\Delta\bar{C}_{l,m}^{q}/\partial H_q,\ \partial\Delta\bar{S}_{l,m}^{q}/\partial H_q\}$ are straightforward, as the formulas Eq. (11) are linear with respect to the surface density $\sigma_q$ or the water height $H_q$.
• The lumped spherical harmonic approach is a representative of the type B mascons.
• The method is very easy to implement after a previous analysis of the GRACE observations by spherical harmonic functions.
• After the determination of all delta Stokes coefficients, all other field quantities can be calculated by standard methods of spherical harmonic synthesis.
• The required integration Eq. (11) can be solved by well-known recursion formulas or by numerical quadrature.
• A high degree L of expansion might be required for straight boundaries and constant values within the two-dimensional step functions.
3.2 Spherical cap mascons
The planar disc mascon approach (Sect. 2.4) is not satisfying from a geometrical viewpoint, as most points within the element lie either above or below the spherical surface. This can be avoided by introducing spherical caps instead of planar discs. Monthly solutions in terms of spherical cap mascons are calculated at the JPL, and the details can be found in Watkins et al. (2015).
To reduce the effort of quadrature for the gradient expression, a local mascon coordinate system is introduced for each element by rotation, where the centre of the mascon coincides with the new North Pole of the system. The new coordinates are the spherical distance γ and the azimuth ξ of the calculation point. The potential is still based on the simple layer theory, leading to the integral
$$\bar{V}_q(\mathbf{r}_\mathrm{P}) = R^{2}\sigma_q\int_{0}^{\alpha}\int_{0}^{2\pi}\frac{\sin\gamma\,\mathrm{d}\xi\,\mathrm{d}\gamma}{\ell(\Omega,\mathbf{r}_\mathrm{P})} \tag{15}$$
for the potential of a spherical cap, with
• $\bar{V}_q(\mathbf{r}_\mathrm{P})$: gravitational potential per mascon in the local mascon coordinate system (the over-bar is introduced here to emphasise the rotated coordinate system);
• $\sigma_q$: product of the gravitational constant $G$ and the mass per mascon $m_q$, divided by the area of the spherical cap, i.e. $\sigma_q = \frac{G m_q}{2\pi(1-\cos\alpha)R^{2}}$;
• $R$: radius of the spherical reference model;
• $\ell(\Omega,\mathbf{r}_\mathrm{P})$: distance between the calculation point $\mathbf{r}_\mathrm{P}=(x_\mathrm{P},y_\mathrm{P},z_\mathrm{P})$ and the points $\mathbf{P}=(x,y,z)$ in the spherical cap (in the original paper, the Euclidean distance is denoted by $d$);
• $\alpha$: radius of the spherical cap in radians.
Gradient of the spherical cap mascons
The gradient of the potential $\bar{V}_q(\mathbf{r}_\mathrm{P})$ is calculated per mascon and rotated at the end back to the original coordinate system. The iterated integration over the spherical distance and azimuth is reduced to a single integral by expressing the azimuthal component via elliptic integrals.
In the local mascon coordinate system, the gradient operator is of the form
$$\nabla\bar{V}_q = \left(\frac{\partial\bar{V}_q}{\partial r},\ 0,\ \frac{1}{r}\frac{\partial\bar{V}_q}{\partial\theta}\right)^{\top}, \tag{16}$$
where θ is the spherical distance between the calculation point and the mascon centre. The formulas of the gradient of a spherical cap are derived in an inter-office memorandum at the JPL
(R. Sunseri, unpublished data: Mass concentration modelled as a spherical cap 343R-11-00) – which is not available to us – and the results are quoted by Watkins et al. (2015):
$$\begin{aligned}\frac{\partial V_q}{\partial r} &= -\sigma_q t^{3}\left(\frac{I_2}{t}-\cos\theta\cdot I_1-\sin\theta\cdot I_3\right)\\ \frac{1}{r}\frac{\partial V_q}{\partial\theta} &= -\sigma_q t^{3}\left(\sin\theta\cdot I_1-\cos\theta\cdot I_3\right),\end{aligned} \tag{17}$$
with the abbreviation $t = R/r$ and the three integrals $\{I_1, I_2, I_3\}$. The solution of the latter requires complete elliptic integrals – of the first kind $K(k)$ and of the second kind $E(k)$ – and numerical integration in the spherical distance direction:
$$\begin{aligned}I_1 &= \int\sin\gamma\cos\gamma\left[\frac{m'}{\sqrt{l'+1}\,(l'+1)}E(k)\right]\mathrm{d}\gamma\\ I_2 &= \int\sin\gamma\left[\frac{m'}{\sqrt{l'+1}\,(l'+1)}E(k)\right]\mathrm{d}\gamma\\ I_3 &= \int\sin\gamma\cos\gamma\left[\frac{m'\left(E(k)-(1-l')K(k)\right)}{\sqrt{l'+1}\,(l'+1)\,l'}\right]\mathrm{d}\gamma,\end{aligned} \tag{18}$$
with the auxiliary expressions
$$\begin{aligned}n &= 1+t^{2}-2t\cos\theta\cos\gamma\\ m' &= 4/n^{3/2}\\ l' &= 2t\sin\theta\sin\gamma/n\\ k^{2} &= 2l'/(l'+1).\end{aligned} \tag{19}$$
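The quoted kernels can be evaluated with standard library routines. The following sketch is our own reading of Eqs. (18)–(19), not the JPL implementation; note that SciPy's ellipk/ellipe take the parameter m = k², and that the integrand of I_3 inherits the l' → 0 issue discussed below:

    import numpy as np
    from scipy.special import ellipe, ellipk

    def cap_integrals(t, theta, alpha, n=2000):
        # Midpoint quadrature of Eq. (18) in the spherical distance gamma in (0, alpha].
        dg = alpha / n
        g = (np.arange(n) + 0.5) * dg
        nn = 1 + t**2 - 2 * t * np.cos(theta) * np.cos(g)   # Eq. (19)
        mp = 4 / nn**1.5
        lp = 2 * t * np.sin(theta) * np.sin(g) / nn
        k2 = 2 * lp / (lp + 1)                              # SciPy expects m = k^2
        E, K = ellipe(k2), ellipk(k2)
        base = mp / (np.sqrt(lp + 1) * (lp + 1))
        I1 = np.sum(np.sin(g) * np.cos(g) * base * E) * dg
        I2 = np.sum(np.sin(g) * base * E) * dg
        I3 = np.sum(np.sin(g) * np.cos(g) * base * (E - (1 - lp) * K) / lp) * dg
        return I1, I2, I3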
• The mascon potential is calculated by quadrature, and analytical derivatives have been derived, which leads to a class A mascon in the threefold scheme.
• The two-dimensional quadrature for the gradients is reduced to a one-dimensional integration.
• The calculation takes place only in the spatial domain and avoids the truncation error of spherical harmonic synthesis.
• The surface elements have the same shape, size and area for each mascon.
• The surface elements do not cover the complete surface, even in a global analysis.
• The model is singular for the potential and the gradient at the location of the centre of the spherical cap.
A straightforward implementation of the formulas Eq. (19) also leads to an undefined expression when the calculation point is identical to the centre of the spherical cap. One then finds $t=1$ and $\theta=0$, and in consequence $l' = 0/0$. A solution might be given in the unavailable inter-office memorandum.
3.3 Mascons via quadrature of the simple layer potential
To avoid truncation errors and aliasing into coefficients of lower degree and order via the spherical harmonic expansion Eq. (14), a complete numerical integration is suggested in Abedini et al. (
2021a, b). The potential is represented – in our notation of Sect. 3 – by the formula
$$T(\mathbf{r}_\mathrm{P}) = -G\iint_{\mathcal{S}}\frac{\sigma(\Omega)}{\ell(\Omega,\mathbf{r}_\mathrm{P})}\mathrm{d}\Omega \tag{20}$$
and is evaluated by numerical quadrature when necessary. The extra minus sign was likely introduced by the authors due to non-geodetic literature, as physics textbooks often use the definition $\ddot{\mathbf{r}} = -\nabla V$.
The derivatives of the range-rate $\dot\rho$ with respect to the surface density are decomposed by the chain rule
$$\frac{\partial\dot\rho}{\partial\sigma_q} = \frac{\partial\dot\rho}{\partial\mathbf{X}}\frac{\partial\mathbf{X}}{\partial\sigma_q}+\frac{\partial\dot\rho}{\partial\dot{\mathbf{X}}}\frac{\partial\dot{\mathbf{X}}}{\partial\sigma_q} \tag{21}$$
into the geometrical components $\{\partial\dot\rho/\partial\mathbf{X},\ \partial\dot\rho/\partial\dot{\mathbf{X}}\}$ and the dynamical components $\{\partial\mathbf{X}/\partial\sigma_q,\ \partial\dot{\mathbf{X}}/\partial\sigma_q\}$, with
• $\mathbf{X} = \mathbf{X}_2-\mathbf{X}_1$: difference vector between the satellites' centre positions;
• $\dot{\mathbf{X}} = \dot{\mathbf{X}}_2-\dot{\mathbf{X}}_1$: difference vector between the satellites' centre velocities.
As the range-rate $\dot\rho$ is calculated by $\dot\rho = \mathbf{X}^{\top}\dot{\mathbf{X}}/\|\mathbf{X}\|$, the geometrical components are known and can be differentiated with respect to the positions and velocities.
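Although the text does not write them out, these derivatives follow from the quotient rule (our own supplementary step):
$$\frac{\partial\dot\rho}{\partial\dot{\mathbf{X}}} = \frac{\mathbf{X}^{\top}}{\|\mathbf{X}\|},\qquad \frac{\partial\dot\rho}{\partial\mathbf{X}} = \frac{\dot{\mathbf{X}}^{\top}}{\|\mathbf{X}\|}-\frac{(\mathbf{X}^{\top}\dot{\mathbf{X}})\,\mathbf{X}^{\top}}{\|\mathbf{X}\|^{3}}.$$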
The dynamical components are determined by the variational equation
$$\ddot{\boldsymbol{\xi}} = \frac{\partial^{2}}{\partial t^{2}}\left\{\frac{\partial\mathbf{X}}{\partial\sigma_q}\right\} = \nabla^{2}U(\mathbf{X})\,\frac{\partial\mathbf{X}}{\partial\sigma_q}+\frac{\partial\,\nabla T}{\partial\sigma_q}, \tag{22}$$
with $U = GM/\|\mathbf{r}_\mathrm{P}\|$ being the potential of the Kepler problem. The equation is solved similarly to an orbit integration, with the initial values $\boldsymbol{\xi}=\mathbf{0}$ and $\dot{\boldsymbol{\xi}}=\mathbf{0}$ for each arc, each satellite and each mascon. The integration error is limited by applying the method only to short arcs over the region of interest, e.g. Greenland in Abedini et al. (2021a).
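A minimal sketch of this integration (ours; reference_orbit and grad_T_dsigma are assumed callables provided by the orbit processing, and kepler_hessian implements the term ∇²U of Eq. (22)):

    import numpy as np
    from scipy.integrate import solve_ivp

    GM = 3.986004418e14                      # Earth's GM in m^3 s^-2

    def kepler_hessian(x):
        # Hessian of U = GM/|x|: GM * (3 x x^T / |x|^5 - I / |x|^3)
        r = np.linalg.norm(x)
        return GM * (3 * np.outer(x, x) / r**5 - np.eye(3) / r**3)

    def variational_rhs(t, y, reference_orbit, grad_T_dsigma):
        # Eq. (22): xi'' = Hess(U)(X) xi + d(grad T)/d(sigma_q) along a reference orbit.
        xi, xidot = y[:3], y[3:]
        X = reference_orbit(t)               # satellite position at time t (assumed given)
        acc = kepler_hessian(X) @ xi + grad_T_dsigma(X)
        return np.concatenate([xidot, acc])

    # Per arc, satellite and mascon: zero initial values over a short interval, e.g.
    # sol = solve_ivp(variational_rhs, (0.0, 600.0), np.zeros(6),
    #                 args=(reference_orbit, grad_T_dsigma), rtol=1e-10)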
• The approach does not fit into the threefold scheme.
• The method avoids truncation errors and aliasing by integration in the spatial domain.
• The surface elements cover the complete surface in a global analysis.
• The potential and the gradient require numerical quadrature.
• The variational equations lead to a high computational burden, which is already admitted in Abedini et al. (2021a).
4 Mascons as a post-processing tool
Since the successful GRACE mission, it has been possible to observe the temporal variations of the gravity field. The standard output of these investigations is monthly solutions of spherical harmonic coefficients, which are meanwhile complemented by mascon solutions for the same time spans by several research centres.
The question arises whether it is possible to estimate local variations from the spherical harmonic solutions by post-processing. This is of particular interest for the ice masses and glaciers in Greenland, Antarctica and Alaska, as well as for the highly variable water masses in the large water basins, which dominate the time-variable part of the gravity field.
The spherical harmonic functions have global support, which contradicts a regional analysis. Another problem is the noise in the coefficients, which is overcome by filtering and de-striping techniques at the cost of spatial resolution. To estimate regional mass changes, it can be helpful to determine an adequate field quantity by spherical harmonic synthesis and to analyse this newly generated signal by another base function with local support (Ran et al., 2018).
4.1 Spherical cap mascons as a post-processing tool
Schrama et al. (2014) use the term mascon for the post-processing of a time series of Stokes coefficients $\{\bar{C}_{l,m}(t), \bar{S}_{l,m}(t)\}$. The goal is the determination of local mass variations in the ice shields and glaciers based on a time series of spherical harmonic coefficients.
A long-term mean value $\{\langle\bar{C}_{l,m}\rangle, \langle\bar{S}_{l,m}\rangle\}$ per coefficient is calculated and subtracted, and a Gauß filter $W_l^{\mathrm{G}}$ in the spectral domain is applied by multiplication with the Stokes coefficients. To represent the equivalent water height instead of the potential, further standard factors are applied:
$$\left\{\begin{array}{c}c_{l,m}^{\mathrm{w}}(t)\\ s_{l,m}^{\mathrm{w}}(t)\end{array}\right\} = \frac{a_{\mathrm{e}}\,\rho_{\mathrm{e}}\,(2l+1)}{3\,\rho_{\mathrm{w}}\,(1+k_l')}\,W_l^{\mathrm{G}}\left\{\begin{array}{c}\bar{C}_{l,m}(t)-\langle\bar{C}_{l,m}\rangle\\ \bar{S}_{l,m}(t)-\langle\bar{S}_{l,m}\rangle\end{array}\right\}, \tag{23}$$
• $a_{\mathrm{e}}$: equatorial radius of the ellipsoidal Earth;
• $\rho_{\mathrm{e}}$: mean density of the Earth;
• $\rho_{\mathrm{w}}$: density of water;
• $k_l'$: Love numbers.
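The text does not specify the weights $W_l^{\mathrm{G}}$; a common choice is the Gaussian averaging kernel, whose degree-wise weights can be computed by a well-known recursion. The sketch below rests on that assumption (our own naming) and becomes numerically unstable at high degrees:

    import numpy as np

    def gauss_weights(L, radius_km, a_km=6371.0):
        # Degree-wise weights W_l of a Gaussian filter with the given half-width
        # radius; normalised such that W_0 = 1.
        b = np.log(2.0) / (1.0 - np.cos(radius_km / a_km))
        W = np.zeros(L + 1)
        W[0] = 1.0
        W[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
        for l in range(1, L):
            W[l + 1] = -(2 * l + 1) / b * W[l] + W[l - 1]
        return W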
The spherical harmonic synthesis
$$h(\lambda_\mathrm{P},\theta_\mathrm{P},t) = \sum_{l=0}^{L}\sum_{m=0}^{l}\bar{P}_{l,m}(\cos\theta_\mathrm{P})\left(c_{l,m}^{\mathrm{w}}(t)\cos m\lambda_\mathrm{P}+s_{l,m}^{\mathrm{w}}(t)\sin m\lambda_\mathrm{P}\right) \tag{24}$$
provides the mass variations with respect to a long-term mean on a spherical surface. The equivalent water height $h(\lambda_\mathrm{P},\theta_\mathrm{P},t)$ is then analysed by a set of localising base functions
$$h(\lambda_\mathrm{P},\theta_\mathrm{P},t) = \sum_{q=1}^{Q}\alpha_q(t)\,\beta_q(\psi_q,L,R) \tag{25}$$
via least-squares estimation, determining the weights $\alpha_q(t)$. Each base function $\beta_q(\psi_q,L,R)$ has the form
$$\beta_q = \sum_{l=0}^{L}\gamma_l(R)\,\bar{P}_l(\cos\psi_q) \tag{26}$$
$$\gamma_l(R) = \frac{1}{2}\int_{0}^{R}\bar{P}_l(\cos\mu)\sin\mu\,\mathrm{d}\mu, \tag{27}$$
which is equivalent to a spherical cap with the radius R, the maximum expansion degree L in the spectral domain and the location (λ[q],θ[q]) of its centre.
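A sketch of Eqs. (26)–(27) and of the least-squares step of Eq. (25) (ours; it uses unnormalised Legendre polynomials P_l for brevity, whereas the text's normalised functions carry an additional factor):

    import numpy as np
    from numpy.polynomial.legendre import legval

    def gamma_l(L, R, n=2000):
        # Midpoint quadrature of Eq. (27) for l = 0..L; R is the cap radius in radians.
        dmu = R / n
        mu = (np.arange(n) + 0.5) * dmu
        g = np.empty(L + 1)
        for l in range(L + 1):
            c = np.zeros(l + 1); c[l] = 1.0
            Pl = legval(np.cos(mu), c)       # Legendre polynomial P_l(cos(mu))
            g[l] = 0.5 * np.sum(Pl * np.sin(mu)) * dmu
        return g

    def estimate_weights(h, psi, L, R):
        # Eq. (25): h is (N,) sampled water heights; psi is (N, Q) spherical
        # distances to the Q cap centres. Returns the weights alpha_q.
        g = gamma_l(L, R)
        B = np.empty(psi.shape)
        for q in range(psi.shape[1]):        # design matrix of base functions beta_q
            B[:, q] = legval(np.cos(psi[:, q]), g)
        return np.linalg.lstsq(B, h, rcond=None)[0]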
• Original GRACE data are not required, as the method is applied to the previous solution by spherical harmonics, which leads to type C mascons.
• The estimation of the weights is a straightforward process via least-squares estimation.
• The surface elements all have the same shape, size and area.
• The required integration Eq. (27) can be solved by recursion formulas or numerical quadrature.
• The effects of the Gauß filtering and the temporal averaging on the solution's quality are difficult to predict; also, the chosen sampling in the spherical harmonic synthesis might have an effect on the estimated masses.
• The surface elements do not cover the complete surface, even in a global analysis.
4.2 Point mass mascons as a post-processing tool
Ran (2017) and Ran et al. (2018) extend an idea of Baur and Sneeuw (2011) by combining point masses and the simple layer potential. The goal of the work is the estimation of mass variations over Greenland based on spherical harmonic coefficients. The GRACE solutions are used to derive the radial component of the gradient, with loading compensation, at orbit altitude:
$$\delta g(\mathbf{r}_\mathrm{P}) = -\frac{\partial V}{\partial r} = -\frac{GM}{r_\mathrm{P}^{2}}\sum_{l=1}^{L}\frac{l+1}{1+k_l'}\left(\frac{a}{r_\mathrm{P}}\right)^{l}\sum_{m=0}^{l}\bar{P}_{l,m}(\cos\theta_\mathrm{P})\left(\Delta\bar{C}_{l,m}\cos m\lambda_\mathrm{P}+\Delta\bar{S}_{l,m}\sin m\lambda_\mathrm{P}\right). \tag{28}$$
This signal is analysed – by least-squares estimation of the surface densities $\rho_q$ – with the simple layer in the region of interest:
$$\delta g_\mathrm{P} = -\frac{\partial}{\partial r}\left\{G\sum_{q=1}^{Q}\rho_q\iint\frac{\mathrm{d}s}{\ell(\Omega,\mathbf{r}_\mathrm{P})}\right\} = -\frac{\partial}{\partial r}\left\{G\sum_{q=1}^{Q}\rho_q I_{q,p}\right\}. \tag{29}$$
The integral $I_{q,p}$ is approximated by a quadrature which evaluates the distances only at the nodes of a grid:
$$I_{q,p} = \iint\frac{\mathrm{d}s}{\ell(\Omega,\mathbf{r}_\mathrm{P})} \approx \sum_{j=1}^{K_q}w_{q,j}\frac{1}{l_{q,j,p}}, \tag{30}$$
• $w_{q,j} = S_q/K_q$: weighting of the evaluated points;
• $l_{q,j,p}$: distance between the nodes and the evaluation point;
• $S_q$: surface area of the mascon with the index $q$;
• $K_q$: number of nodes within the patch $q$.
The Euclidean distance is expressed in spherical coordinates as
$$l_{q,j,p} = \sqrt{r_{q,j}^{2}+r_{p}^{2}-2\,r_{q,j}\,r_{p}\cos\Psi_{q,j,p}},$$
• $r_p = \|\mathbf{r}_\mathrm{P}\|$: distance of the calculation point to the origin;
• $r_{q,j} = \|\mathbf{r}_{q,j}\|$: distance of the nodes within the patch to the origin;
• $\Psi_{q,j,p}$: angle between the vectors $\mathbf{r}_{q,j}$ and $\mathbf{r}_\mathrm{P}$.
The observable of the study is then given by
$$\delta g_\mathrm{P} \approx G\sum_{q=1}^{Q}\rho_q\sum_{j=1}^{K_q}w_{q,j}\frac{r_{q,j}-r_{p}\cos\Psi_{q,j,p}}{l_{q,j,p}^{3}}. \tag{31}$$
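The weighted point-mass sum of Eqs. (30)–(31) is compact enough for a direct sketch (ours; array shapes and names are assumptions):

    import numpy as np

    def delta_g(rho, nodes, areas, r_P):
        # Eq. (31): radial gravity change from the point-mass mascons.
        # rho: (Q,) surface densities; nodes: list of (K_q, 3) node coordinates;
        # areas: (Q,) patch areas S_q; r_P: (3,) evaluation point.
        G = 6.674e-11
        rp = np.linalg.norm(r_P)
        total = 0.0
        for rho_q, pts, S_q in zip(rho, nodes, areas):
            w = S_q / len(pts)                  # w_{q,j} = S_q / K_q
            rq = np.linalg.norm(pts, axis=1)
            cos_psi = pts @ r_P / (rq * rp)
            l = np.sqrt(rq**2 + rp**2 - 2 * rq * rp * cos_psi)
            total += rho_q * np.sum(w * (rq - rp * cos_psi) / l**3)
        return G * total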
• The method is applied to the previous solution by spherical harmonics, which leads to type C mascons.
• The method is very easy to implement.
• The integration per mascon element is replaced by a weighted sum of point masses located on a grid.
• The model is singular for the potential and the radial derivatives at the locations of the nodes.
• Finding the weighting $w_{q,j} = S_q/K_q$ might be challenging for irregularly shaped patches.
5 Summary – mascons in gravity field modelling
Point mass models are an important tool for gravity field modelling due to their simplicity and efficiency. The point mass representation is used for celestial bodies with irregular shapes, but also for the Earth or the Moon on regional and global scales. Point mass mascons are also a key aspect of converting spherical harmonic solutions into regional mass variations, which supports the interpretation of geophysical processes.
Mascons represented by finite surface elements are based on the simple layer potential. These models form a subset of the localising base functions for gravity field modelling. Without neighbourhood conditions, a solution close to the ground generates a discontinuous field. The discontinuity problem is damped for higher evaluation altitudes or small patches. The constant density per mascon simplifies the interpretation of mass variations in comparison to other localising base functions (e.g. wavelets or Slepian functions), which vary within their region of interest. Planar disc and spherical cap mascons are radially symmetric base functions, while the other mascon concepts allow for patches with arbitrary shapes. In particular, the shape can follow the geometry of water basins, which reduces the leakage of signals in hydro-geodesy.
No data sets were used in this article.
The author has declared that there are no competing interests.
Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The author is grateful to Nico Sneeuw at the Institute of Geodesy (University of Stuttgart) for encouraging and supporting the idea of a historical review.
This paper was edited by Hans Volkert and reviewed by two anonymous referees.
Abedini, A., Keller, W., and Amiri-Simkooei, A.: Estimation of surface density changes using a mascon method in GRACE-like missions, J. Earth Syst. Sci., 130, 26, https://doi.org/10.1007/s12040-020-01535-5, 2021a.
Abedini, A., Keller, W., and Amiri-Simkooei, A. R.: On the performance of equiangular mascon solution in GRACE-like missions, Ann. Geophys., 64, GD219, https://doi.org/10.4401/ag-8621, 2021b.
Andrews, S. B., Moore, P., and King, M. A.: Mass change from GRACE: a simulated comparison of Level-1B analysis techniques, Geophys. J. Int., 200, 503–518, https://doi.org/10.1093/gji/ggu402, 2015.
Barthelmes, F.: Untersuchungen zur Approximation des äußeren Gravitationsfeldes der Erde durch Punktmassen mit optimierten Positionen, PhD thesis, Veröffentlichungen des Zentralinstituts für Physik der Erde 92, Potsdam, 122 pp., https://gfzpublic.gfz-potsdam.de/pubman/item/item_236018_1/component/file_236017/barthelmes_diss1986.pdf (last access: 23 September 2022), 1986.
Baur, O. and Sneeuw, N.: Assessing Greenland ice mass loss by means of point-mass modelling: a viable methodology, J. Geodesy, 85, 607–615, https://doi.org/10.1007/s00190-011-0463-1, 2011.
Chanut, T. G. G., Aljbaae, S., and Carruba, V.: Mascon gravitation model using a shaped polyhedral source, Mon. Not. R. Astron. Soc., 450, 3742–3749, https://doi.org/10.1093/mnras/stv845, 2015.
Claessens, S., Featherstone, W., and Barthelmes, F.: Experiences with Point-Mass Gravity Field Modelling in the Perth Region, Western Australia, Geomatics Res. Austr., 75, 53–86, http://hdl.handle.net/20.500.11937/31745 (last access: 23 September 2022), 2001.
Conel, J. E. and Holstrom, G. B.: Lunar Mascons: A Near-Surface Interpretation, Science, 162, 1403–1405, https://doi.org/10.1126/science.162.3860.1403, 1968.
Floberghagen, R.: The Far Side – Lunar Gravimetry Into the Third Millenium, PhD thesis, Technische Universiteit Delft, 283 pp., ISBN 9090146938, 2001.
Heiskanen, W. and Moritz, H.: Physical Geodesy, W. H. Freeman, San Francisco, California, 1967.
Klosko, S., Rowlands, D. D., Luthcke, S. B., Lemoine, F. G., Chinn, D., and Rodell, M.: Evaluation and validation of mascon recovery using GRACE KBRR data with independent mass flux estimates in the Mississippi Basin, J. Geodesy, 83, 817–827, https://doi.org/10.1007/s00190-009-0301-x, 2009.
Koch, K.-R. and Witte, B. U.: The Earth's Gravity Field Represented by a Simple Layer Potential from Doppler Tracking of Satellites, Technical Report, U.S. Department of Commerce, https://repository.library.noaa.gov/view/noaa/26706 (last access: 23 September 2022), 1971.
Krogh, P. E.: High resolution time-lapse gravity field from GRACE for hydrological modelling, PhD thesis, Technical University of Denmark, 112 pp., https://orbit.dtu.dk/en/publications/high-resolution-time-lapse-gravity-field-from-grace-for-hydrologi (last access: 23 September 2022), 2011.
Lemoine, F. G., Luthcke, S. B., Rowlands, D. D., Chinn, K., and Cox: The use of mascons to re-solve time-variable gravity from GRACE, in: Dynamic Planet, edited by: Tregoning, P. and Rizos, C., Vol. 130, International Association of Geodesy Symposia, Springer, Berlin Heidelberg, 231–236, ISBN 9783540493494, 2007.
Lin, M., Denker, H., and Müller, J.: Regional gravity field modelling using free-positioned point masses, Stud. Geophys. Geod., 58, 207–226, https://doi.org/10.1007/s11200-013-1145-7, 2014.
Liu, X.: Global gravity field recovery from satellite-to-satellite tracking data with the acceleration approach, PhD thesis, Delft, 226 pp., ISBN 9789061323096, 2008.
Luthcke, S. B., Arendt, A., Rowlands, D. D., McCarthy, J. J., and Larsen, C. F.: Recent glacier mass changes in the Gulf of Alaska region from GRACE mascon solutions, J. Glaciol., 188, 767–777, https://doi.org/10.3189/002214308787779933, 2008.
Luthcke, S. B., Sabaka, T., Loomis, B., Arendt, A., McCarthy, J. J., and Camp, J.: Antarctica, Greenland and Gulf of Alaska land-ice evolution from an iterated GRACE global mascon solution, J. Glaciol., 216, 613–631, https://doi.org/10.3189/2013JoG12J147, 2013.
Morrison, F.: Algorithms for Computing the Geopotential Using a Simple-Layer Density Model, Technical Report, U.S. Department of Commerce, https://repository.library.noaa.gov/view/noaa/30813 (last access: 23 September 2022), 1971.
Moulton, F. R.: An Introduction to Celestial Mechanics, 2nd Edn., The Macmillan Company, New York, 1960.
Muller, P. M.: Implication of the lunar mascon discovery, in: Proceedings of the American Philosophical Society, Vol. 116, 362–364, https://www.jstor.org/stable/986067 (last access: 23 September 2022), 1972.
Muller, P. M. and Sjogren, W. L.: Consistency of Lunar Orbiter Residuals With Trajectory and Local Gravity Effects, Technical Report 32-1307, Jet Propulsion Laboratory, https://ntrs.nasa.gov/citations/19680024573 (last access: 23 September 2022), 1968a.
Muller, P. M. and Sjogren, W. L.: Mascons: Lunar Mass Concentrations, Science, 161, 680–684, https://doi.org/10.1126/science.161.3842.680, 1968b.
Neal, C. R.: The moon 35 years after Apollo: What's left to learn?, Chem. Erde, 69, 3–43, https://doi.org/10.1016/j.chemer.2008.07.002, 2008.
Ran, J.: Analysis of mass variations in Greenland by a novel variant of the mascon approach, PhD thesis, Delft University of Technology, 129 pp., ISBN 9789492683649, 2017.
Ran, J., Ditmar, P., Klees, R., and Farahani, H. H.: Statistically optimal estimation of Greenland Ice Sheet mass variations from GRACE monthly solutions using an improved mascon approach, J. Geodesy, 92, 299–319, https://doi.org/10.1007/s00190-017-1063-5, 2018.
Rowlands, D. D., Luthcke, S. B., Klosko, S. M., Lemoine, F. G. R., Chinn, D. S., McCarthy, J. J., Cox, C. M., and Anderson, O. B.: Resolving mass flux at high spatial and temporal resolution using GRACE intersatellite measurements, Geophys. Res. Lett., 32, L04310, https://doi.org/10.1029/2004GL021908, 2005.
Rowlands, D. D., Luthcke, S. B., McCarthy, J. J., Klosko, S. M., Chinn, D. S., Lemoine, F. G., Boy, J.-P., and Sabaka, T. J.: Global mass flux solutions from GRACE: A comparison of parameter estimation strategies – Mass concentrations versus Stokes coefficients, J. Geophys. Res.-Sol., 115, B01403, https://doi.org/10.1029/2009JB006546, 2010.
Save, H., Bettadpur, S., and Tapley, B. D.: High-resolution CSR GRACE RL05 mascons, J. Geophys. Res.-Sol., 121, 7546–7569, https://doi.org/10.1002/2016JB013007, 2016.
Schrama, E. J. O., Wouters, B., and Rietbroek, R.: A mascon approach to assess ice sheet and glacier mass balances and their uncertainties from GRACE data, J. Geophys. Res.-Sol., 119, 6048–6066, https://doi.org/10.1002/2013JB010923, 2014.
Seeber, G.: Satellite Geodesy, 2nd Edn., Walter de Gruyter, Berlin, New York, ISBN 3110175495, 2003.
Tapley, B. D., Bettadpur, S., Watkins, M., and Reigber, C.: The Gravity Recovery and Climate Experiment: Mission Overview and Early Results, Geophys. Res. Lett., 31, L09607, https://doi.org/10.1029/2004GL019920, 2004.
Watkins, M. M., Wiese, D. N., Yuan, D.-N., Boening, C., and Landerer, F. W.: Improved methods for observing Earth's time variable mass distribution with GRACE using spherical cap mascons, J. Geophys. Res.-Sol., 120, 2648–2671, https://doi.org/10.1002/2014JB011547, 2015.
Weinwurm, G.: Amalthea's Gravity Field and its Impact on a Spacecraft Trajectory, PhD thesis, Technische Universität Wien, 2004.
Wong, L., Buechler, G., Downs, W., Sjogren, W., Muller, P., and Gottlieb, P.: A Surface-Layer Representation of the Lunar Gravitational Field, J. Geophys. Res., 76, 6220–6236, https://doi.org/10.1029/JB076i026p06220, 1971.
The equivalent system of the USSR is not discussed in the investigated material.
The unusual arguments of the distance expressions are introduced here for highlighting the dependency on two distinct point sets.
Unity Physics and Making it Deterministic
Determinism of Unity's physics engine has been a recurring topic within the community, and yet I don't feel it's been explored very much, as people often condemn it simply because it uses floating-point math.
Many games need determinism for things like synchronous simulations across the network or small, elegant replays. Common knowledge so far has been that Unity's physics is not deterministic because of floating-point imprecision. According to this article, floats have an accuracy of at least 6 significant digits, which, I think, is more than enough precision for physics.
I never saw why the inputs and outputs for physics couldn't just be rounded to the significant digits, as this would take out nearly all randomness caused by floating-point imprecision. Furthermore, certain genres like RTS only require simulation at long intervals - maybe 10 ticks per second - so not only would there be less room for inaccuracies, but the rounding would also have a less drastic effect on the simulation.
So far, I can think of no reason why, through the use of heavy rounding and a low simulation rate, the physics can't be made deterministic. I don't think there are any random factors (besides imprecision) in the PhysX or Box2D engines, and floats definitely have enough precision for smooth rounding. At the very least, I think the collision detection could be used and the impulses calculated in a custom manner. That would be much better than the alternative of having developers code everything from scratch. Do you think this would work, and if not, why?
Physics across different platforms and architectures, or on a single machine needing reproducible results (playbacks), would seem to require either an authoritative server or fixed-point math.
Or is there an alternative approach?
Read the OP. In any case, what makes fixed-point math so special compared to floating-point math when both have a sufficient degree of accuracy?
I made a test to see whether or not Unity PhysX can be made deterministic. Please check it out here if you’re interested in helping to solve this mystery.
I don't think it would be an argument about the accuracy of the calculation, but rather about the reproducibility of the calculation. Or perhaps not?
When I have my Windows machine working again, I’ll check out the link.
It seems unintuitive that 1.1 + 1.1 won't equal 2.2 on all computers, whereas it's understandable that 1.00000000001 + 1.00000000001 might not equal 2.00000000002, because there's a lot more that needs to be stored in just 4 bytes. That's basically what rounding everything is doing - scaling the calculations down to more manageable numbers for the computers.
This was an interesting and relevant read:
An additional complication with true input-based playback for a full scene/match/whatever is the order of operations for the objects themselves. Lots of gameplay things (i.e. jump pads, boosts,
whatever), might need to apply forces to objects in the same order in your replay as in the original version to get the same result.
That kind of stuff requires determinism throughout the engine, which Unity isn't well suited for, especially for object creation/deletion. If you create 10 pieces of physical shrapnel in an explosion, they're going to have to be created in the same order so they handle collisions in the same order later, etc.
Are you saying calling Instantiate(Obj1) before Instantiate(Obj2) doesn’t necessarily mean it gets instantiated first?
No, but many events in Unity are ordered either by when they were loaded, by GUID, or by other essentially random elements. If you have a bunch of game logic scripts, and they're all doing some kind of distance check or whatever in Update/FixedUpdate, you need to control the order to get fully deterministic playback.
You can actually set script execution order relative to each other (so ScoreManager fires before RocketExplosion or whatever): Unity - Manual: Script Execution Order settings
But even then, if you have 10 objects with RocketExplosion on them, you still need to make sure they all execute their Update/FixedUpdate in the same order deterministically for full deterministic playback to work…
Forgive me for the ignorance, but why is this?
By the way, I’ve conducted more tests on different platforms - sadly, it’s not possible to make Unity’s physics deterministic. They already round positions to the 2nd decimal place and give you no
control over what happens in between the roundings.
It’s a good question!
I'm getting a little out of my depth with low-level stuff, but the very general answer is that there are lots of unordered data structures in use within engines. Something like HashSet at the C# level is a good example of a fast lookup that can't provide ordered results. It's basically just an optimization tradeoff.
Most of these data structures use an object's HashCode, which you can see in Unity via the debug Inspector (top-right menu toggle) or via:
Debug.Log("My hash code is:" + this.GetHashCode());
Reload the same scene, and that code changes for each object.
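The same effect is easy to demonstrate outside Unity. Here's a small language-agnostic illustration, sketched in Python (string hashing is randomized per process, so the first printed order can differ between runs):

    objs = {"RocketExplosion_A", "RocketExplosion_B", "ScoreManager"}
    print(list(objs))    # iteration order follows the hash layout - may vary per run
    print(sorted(objs))  # an explicit sort key gives a stable, reproducible order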
So even with float precision wrangled, you're still going to need to architect an engine so that multiple collisions in each frame are processed in the same order, your Update/FixedUpdate logic runs in the same order, and you aren't using any random numbers that haven't passed through some kind of seeded random number generator. It gets pretty deep.
Even at the physics level, truncating/rounding to a defined range is going to bubble up and degrade your actual physics simulation fidelity pretty badly. That kind of limited precision doesn’t seem
so bad if you imagine rounding a position every frame, but to do collisions deterministically you’re going to need to round everything at every step (calculating the angles of surfaces, the forces
involved, etc etc)…
Oh, that makes sense. Much appreciated for the explanation.
It's rounded for performance reasons. Performance is much more important than extreme precision.
Rounding floating-point numbers to decimal places is not accurate, because floating-point numbers are not represented in decimal; they are represented in binary. So when you round a number like 1.0011043425546 to the decimal representation 1.001, the result is NOT necessarily precisely representable as that exact decimal number. You get just as much inaccuracy when you round to decimal digits as you do when you don't round at all.
Also, there are many numbers used by the physics simulation that are very small, like in the range of 0.0000000000111 to 0.0000000000222, which can be represented accurately as 1.11E-10 to 2.22E-10,
so rounding them off to the nearest 0.0001 completely throws those values away. Rounding to decimal digits is NOT a valid approach to accurate numerical simulation.
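Here's a quick sketch of that first point in Python, using numpy's float32 to mirror Unity's 32-bit floats (the value is just an example):

    import numpy as np

    x = np.float32(1.0011043425546)
    r = np.float32(round(float(x), 3))   # intended decimal value: 1.001
    print(f"{r:.10f}")                   # prints 1.0010000467... - 1.001 has no exact
                                         # binary representation, so the "rounded"
                                         # value is still an approximation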
What if the numbers are rounded to powers of 2? I.e. Math.Round(X*1024)/1024
No amount or method of rounding floating point numbers will yield deterministic answers.
Suppose two different machines perform a sequence of floating point operations with a correct mathematical answer of 1.5. Machine A gets the result 1.49999999999999999 while Machine B gets the result
1.500000000000001. Machine A rounds to 1, Machine B rounds to 2.
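Sketched concretely in Python (the two literals simply stand in for the two machines' results):

    a = 1.4999999999999998     # machine A's result
    b = 1.5000000000000002     # machine B's result
    print(round(a), round(b))  # 1 2 - the same rounding rule, divergent outputs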
Scientific Publications
Below you will find the publications of which I am a co-author, in reverse chronological order of their submission/publication in peer-reviewed journals (5 letters and 13 articles). New publications are announced after their submission on arXiv.
[18] M. Mangeat^1, S. Chakraborty^1, A. Wysocki^1, and H. Rieger^1,2, Stationary particle currents in sedimenting active matter wetting a wall, Phys. Rev. E 109, 014616 (2024).
^1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.^2INM – Leibniz Institute for New Materials, Campus D2 2, D-66123 Saarbrücken, Germany.
doi:10.1103/PhysRevE.109.014616 - arXiv:2309.09714 - gitHub - movie1 - movie2a - movie2b - pdf
Abstract Recently it was predicted, on the basis of a lattice gas model, that scalar active matter in a gravitational field would rise against gravity up a confining wall or inside a thin capillary -
in spite of repulsive particle-wall interactions [Phys. Rev. Lett. 124, 048001 (2020)]. In this paper we confirm this prediction with sedimenting active Brownian particles (ABPs) in a box numerically
and elucidate the mechanism leading to the formation of a meniscus rising above the bulk of the sedimentation region. The height of the meniscus increases with the activity of the system,
algebraically with the Péclet number. The formation of the meniscus is determined by a stationary circular particle current, a vortex, centered at the base of the meniscus, whose size and strength
increase with the ABP activity. The origin of these vortices can be traced back to the confinement of the ABPs in a box: already the stationary state of ideal (non-interacting) ABPs without
gravitation displays circular currents that arrange in a highly symmetric way in the eight octants of the box. Gravitation distorts this vortex configuration downward, leaving two major vortices at
the two side walls, with a strong downward flow along the walls. Repulsive interactions between the ABPs change this situation only as soon as motility induced phase separation (MIPS) sets in and
forms a dense, sedimented liquid region at the bottom, which pushes the center of the vortex upwards towards the liquid-gas interface. Self-propelled particles therefore represent an impressive
realization of scalar active matter that forms stationary particle currents being able to perform visible work against gravity or any other external field, which we predict to be observable
experimentally in active colloids under gravitation.
[17] M. Karmakar^1, S. Chatterjee^2, M. Mangeat^2, H. Rieger^2,3, and R. Paul^1, Jamming and flocking in the restricted active Potts model, Phys. Rev. E 108, 014604 (2023).
^1School of Mathematical & Computational Sciences, Indian Association for the Cultivation of Science, Kolkata 700032, India.^2Center for Biophysics & Department for Theoretical Physics, Saarland
University, D-66123 Saarbrücken, Germany.^3INM – Leibniz Institute for New Materials, Campus D2 2, D-66123 Saarbrücken, Germany.
doi:10.1103/PhysRevE.108.014604 - arXiv:2212.10251 - gitHub - pdf
Abstract We study the active Potts model with either site occupancy restriction or on-site repulsion to explore jamming and kinetic arrest in a flocking model. The incorporation of such volume
exclusion features leads to a surprisingly rich variety of self-organized spatial patterns. While bands and lanes of moving particles commonly occur without or under weak volume exclusion, strong
volume exclusion along with low temperature, high activity, and large particle density facilitates traffic jams. Through several phase diagrams, we identify the phase boundaries separating the jammed
and free-flowing phases and study the transition between these phases, which provides us with both qualitative and quantitative predictions of how jamming might be delayed or dissolved. We further formulate and analyze a hydrodynamic theory for the restricted APM that predicts various features of the microscopic model.
[16] S. Chatterjee^1, M. Mangeat^1, C.-U. Woo^2, H. Rieger^1,3, and J. D. Noh^2, Flocking of two unfriendly species: The two-species Vicsek model, Phys. Rev. E 107, 024607 (2023).
^1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.^2Department of Physics, University of Seoul, Seoul 02504, Korea.^3INM – Leibniz
Institute for New Materials, Campus D2 2, D-66123 Saarbrücken, Germany.
doi:10.1103/PhysRevE.107.024607 - arXiv:2211.10494 - gitHub - pdf
Abstract We consider the two-species Vicsek model (TSVM) consisting of two kinds of self-propelled particles, A and B, that tend to align with particles from the same species and to antialign with the
other. The model shows a flocking transition that is reminiscent of the original Vicsek model: it has a liquid-gas phase transition and displays micro-phase-separation in the coexistence region where
multiple dense liquid bands propagate in a gaseous background. The interesting features of the TSVM are the existence of two kinds of bands, one composed of mainly A particles and one mainly of B
particles, the appearance of two dynamical states in the coexistence region: the PF (parallel flocking) state in which all bands of the two species propagate in the same direction, and the APF
(antiparallel flocking) state in which the bands of species A and species B move in opposite directions. When PF and APF states exist in the low-density part of the coexistence region they perform
stochastic transitions from one to the other. The system size dependence of the transition frequency and dwell times show a pronounced crossover that is determined by the ratio of the band width and
the longitudinal system size. Our work paves the way for studying multispecies flocking models with heterogeneous alignment interactions.
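For readers unfamiliar with the underlying dynamics, a minimal single-species Vicsek update looks as follows (a sketch of the standard model only, with our own parameter names; the paper's two-species rule additionally flips the alignment sign for neighbours of the other species):

    import numpy as np

    def vicsek_step(pos, theta, L, r=1.0, v=0.5, eta=0.3, rng=np.random.default_rng()):
        # One update of the standard single-species Vicsek model in a periodic box L.
        N = len(pos)
        new_theta = np.empty(N)
        for i in range(N):
            d = pos - pos[i]
            d -= L * np.round(d / L)              # minimum-image convention
            nb = (d ** 2).sum(axis=1) < r ** 2    # neighbours within radius r (incl. self)
            # mean heading of the neighbourhood, plus angular noise of strength eta
            new_theta[i] = np.arctan2(np.sin(theta[nb]).mean(), np.cos(theta[nb]).mean())
        new_theta += eta * rng.uniform(-np.pi, np.pi, N)
        pos = (pos + v * np.column_stack([np.cos(new_theta), np.sin(new_theta)])) % L
        return pos, new_theta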
[15] S. Chatterjee^1, M. Mangeat^1, and H. Rieger^1,2, Polar flocks with discretized directions: the active clock model approaching the Vicsek model, EPL 138, 41001 (2022).
^1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.^2INM – Leibniz Institute for New Materials, Campus D2 2, D-66123 Saarbrücken, Germany.
doi:10.1209/0295-5075/ac6e4b - arXiv:2203.01181 - gitHub - pdf
Abstract We consider the off-lattice two-dimensional $q$-state active clock model (ACM) as a natural discretization of the Vicsek model (VM) describing flocking. The ACM consists of particles able to
move in the plane in a discrete set of $q$ equidistant angular directions, as in the active Potts model (APM), with an alignment interaction inspired by the ferromagnetic equilibrium clock model. We
find that for a small number of directions, the flocking transition of the ACM has the same phenomenology as the APM, including macrophase separation and reorientation transition. For a larger number
of directions, the flocking transition in the ACM becomes equivalent to the one of the VM and displays microphase separation and only transverse bands, i.e. no re-orientation transition.
Concomitantly, the transition of the $q\to\infty$ limit of the ACM, the active XY model (AXYM), is also in the same universality class as the VM. We also construct a coarse-grained hydrodynamic description for the ACM and AXYM akin to the VM.
[14] A. Alexandre^1, M. Mangeat^2, T. Guérin^1, and D. S. Dean^1,3, How Stickiness Can Speed Up Diffusion in Confined Systems, Phys. Rev. Lett. 128, 210601 (2022).
^1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.^2Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123
Saarbrücken, Germany.^3Team MONC, INRIA Bordeaux Sud Ouest, CNRS UMR 5251, Bordeaux INP, Univ. Bordeaux, F-33400 Talence, France.
doi:10.1103/PhysRevLett.128.210601 - arXiv:2112.05532 - pdf
Abstract The paradigmatic model for heterogeneous media used in diffusion studies is built from reflecting obstacles and surfaces. It is well known that the crowding effect produced by these reflecting
surfaces slows the dispersion of Brownian tracers. Here, using a general adsorption desorption model with surface diffusion, we show analytically that making surfaces or obstacles attractive can
accelerate dispersion. In particular, we show that this enhancement of diffusion can exist even when the surface diffusion constant is smaller than that in the bulk. Even more remarkably, this
enhancement effect occurs when the effective diffusion constant, when restricted to surfaces only, is lower than the effective diffusivity with purely reflecting boundaries. We give analytical
formulas for this intriguing effect in periodic arrays of spheres as well as undulating microchannels. Our results are confirmed by numerical calculations and Monte Carlo simulations.
[13] M. Mangeat^1,2, T. Guérin^1, and D. S. Dean^1,3, Steady state of overdamped particles in the non-conservative force field of a simple non-linear model of optical trap, J. Stat. Mech. 2021,
113205 (2021).
^1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.^2Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123
Saarbrücken, Germany.^3Team MONC, INRIA Bordeaux Sud Ouest, CNRS UMR 5251, Bordeaux INP, Univ. Bordeaux, F-33400 Talence, France.
doi:10.1088/1742-5468/ac3907 - arXiv:2110.04362 - pdf
Abstract Optically trapped particles are often subject to a non-conservative scattering force arising from radiation pressure. In this paper we present an exact solution for the steady state statistics
of an overdamped Brownian particle subjected to a commonly used force field model for an optical trap. The model is the simplest of its kind that takes into account non-conservative forces. In
particular, we present exact results for certain marginals of the full three-dimensional steady state probability distribution, as well as for the toroidal probability currents present in the steady state and for the circulation of these currents. Our analytical results are confirmed by numerical solution of the steady state Fokker-Planck equation.
[12] M. Mangeat^1 and H. Rieger^1, Narrow escape problem in two-shell spherical domains, Phys. Rev. E 104, 044124 (2021).
^1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
doi:10.1103/PhysRevE.104.044124 - arXiv:2104.13125 - gitHub - pdf
Abstract Intracellular transport in living cells is often spatially inhomogeneous with an accelerated effective diffusion close to the cell membrane and a ballistic motion away from the centrosome due
to active transport along actin filaments and microtubules, respectively. Recently it was reported that the mean first passage time (MFPT) for transport to a specific area on the cell membrane is
minimal for an optimal actin cortex width. In this paper we ask whether this optimization in a two-compartment domain can also be achieved by passive Brownian particles. We consider a Brownian motion
with different diffusion constants in the two shells and a potential barrier between the two and investigate the narrow escape problem by calculating the MFPT for Brownian particles to reach a small
window on the external boundary. In two and three dimensions, we derive asymptotic expressions for the MFPT in the thin cortex and small escape region limits confirmed by numerical calculations of
the MFPT using the finite element method and stochastic simulations. From this analytical and numeric analysis we finally extract the dependence of the MFPT on the ratio of diffusion constants, the
potential barrier height and the width of the outer shell. The first two are monotonous whereas the last one may have a minimum for a sufficiently attractive cortex, for which we propose an
analytical expression of the potential barrier height matching very well the numerical predictions.
[11] M. Mangeat^1, S. Chatterjee^2, R. Paul^2, and H. Rieger^1, Flocking with a q-fold discrete symmetry: Band-to-lane transition in the active Potts model, Phys. Rev. E 102, 042601 (2020).
^1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.^2School of Mathematical & Computational Sciences, Indian Association for the
Cultivation of Science, Kolkata 700032, India.
doi:10.1103/PhysRevE.102.042601 - arXiv:2007.14875 - gitHub - pdf
Abstract We study the $q$-state active Potts model (APM) on a two-dimensional lattice in which self-propelled particles have q internal states corresponding to the q directions of motion. A local
alignment rule inspired by the ferromagnetic $q$-state Potts model and self-propulsion via biased diffusion according to the internal particle states elicits collective motion at high densities and
low noise. We formulate a coarse-grained hydrodynamic theory with which we compute the phase diagrams of the APM for $q=4$ and $q=6$ and analyze the flocking dynamics in the coexistence region, where
the high-density (polar liquid) phase forms a fluctuating stripe of coherently moving particles on the background of the low-density (gas) phase. A reorientation transition of the phase-separated
profiles from transversal band motion to longitudinal lane formation is found, which is absent in the Vicsek model and the active Ising model. The origin of this reorientation transition is revealed
by a stability analysis: for large velocities the transverse diffusivity approaches zero and stabilizes lanes. Computer simulations corroborate the analytical predictions of the flocking and
reorientation transitions and validate the phase diagrams of the APM.
[10] S. Chatterjee^1, M. Mangeat^2, R. Paul^1, and H. Rieger^2, Flocking and reorientation transition in the 4-state active Potts model, EPL 130, 66001 (2020).
^1School of Mathematical & Computational Sciences, Indian Association for the Cultivation of Science, Kolkata 700032, India.^2Center for Biophysics & Department for Theoretical Physics, Saarland
University, D-66123 Saarbrücken, Germany.
doi:10.1209/0295-5075/130/66001 - arXiv:1911.13067 - gitHub - pdf
Résumé We study the active 4-state Potts model (APM) on the square lattice in which active particles have four internal states corresponding to the four directions of motion. A local alignment rule
inspired by the ferromagnetic 4-state Potts model and self-propulsion via biased diffusion according to the internal particle states leads to flocking at high densities and low noise. We compute the
phase diagram of the APM and explore the flocking dynamics in the region, in which the high-density (liquid) phase coexists with the low-density (gas) phase and forms a fluctuating band of coherently
moving particles. As a function of the particle self-propulsion velocity, a novel reorientation transition of the phase-separated profiles from transversal to longitudinal band motion is revealed,
which is absent in the Vicsek model and the active Ising model. We further construct a coarse-grained hydrodynamic description of the model which validates the results for the microscopic model.
[09] M. Mangeat^1, T. Guérin^1, and D. S. Dean^1, Effective diffusivity of Brownian particles in a two dimensional square lattice of hard disks, J. Chem. Phys. 152, 234109 (2020).
^1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
doi:10.1063/5.0009095 - arXiv:2111.04354 - pdf
Résumé We revisit the classic problem of the effective diffusion constant of a Brownian particle in a square lattice of reflecting impenetrable hard disks. This diffusion constant is also related to
the effective conductivity of non-conducting and infinitely conductive disks in the same geometry. We show how a recently derived Green’s function for the periodic lattice can be exploited to derive
a series expansion of the diffusion constant in terms of the disk’s volume fraction φ. Second, we propose a variant of the Fick–Jacobs approximation to study the large volume fraction limit. This
combination of analytical results is shown to describe the behavior of the diffusion constant for all volume fractions.
[08] M. Mangeat^1 and H. Rieger^1, The narrow escape problem in a circular domain with radial piecewise constant diffusivity, J. Phys. A: Math. Theor. 52, 424002 (2019).
^1Center for Biophysics & Department for Theoretical Physics, Saarland University, D-66123 Saarbrücken, Germany.
doi:10.1088/1751-8121/ab4348 - arXiv:1906.06975 - gitHub - pdf
Résumé The stochastic motion of particles in living cells is often spatially inhomogeneous with a higher effective diffusivity in a region close to the cell boundary due to active transport along
actin filaments. As a first step to understand the consequence of the existence of two compartments with different diffusion constant for stochastic search problems we consider here a Brownian
particle in a circular domain with different diffusion constants in the inner and the outer shell. We focus on the narrow escape problem and compute the mean first passage time (MFPT) for Brownian
particles starting at some pre-defined position to find a small region on the outer reflecting boundary. For the annulus geometry we find that the MFPT can be minimized for a specific value of the
width of the outer shell. In contrast, for the two-shell geometry we show that the MFPT depends monotonically on all model parameters, in particular on the outer shell width. Moreover we find that the
distance between the starting point and the narrow escape region which maximizes the MFPT depends discontinuously on the ratio between inner and outer diffusivity.
[07] M. Mangeat^1, Y. Amarouchene^1, Y. Louyer^1, T. Guérin^1, and D. S. Dean^1, Role of nonconservative scattering forces and damping on Brownian particles in optical traps, Phys. Rev. E 99, 052107 (2019).
^1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
doi:10.1103/PhysRevE.99.052107 - arXiv:1812.09188 - pdf
Résumé We consider a model of a particle trapped in a harmonic optical trap but with the addition of a nonconservative radiation induced force. This model is known to correctly describe
experimentally observed trapped particle statistics for a wide range of physical parameters, such as temperature and pressure. We theoretically analyze the effect of nonconservative force on the
underlying steady state distribution as well as the power spectrum for the particle position. We compute perturbatively the probability distribution of the resulting nonequilibrium steady states for
all dynamical regimes underdamped through to overdamped and give expressions for the associated currents in phase space (position and velocity). We also give the spectral density of the trapped
particle's position in all dynamical regimes and for any value of the nonconservative force. Signatures of the presence of nonconservative forces are shown to be particularly strong for the
underdamped regime at low frequencies.
[06] Y. Amarouchene^1, M. Mangeat^1, B. Vidal Montes^1, L. Ondic^2, T. Guérin^1, D. S. Dean^1, and Y. Louyer^1, Nonequilibrium Dynamics Induced by Scattering Forces for Optically Trapped
Nanoparticles in Strongly Inertial Regimes, Phys. Rev. Lett. 122, 183901 (2019).
^1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.^2Institute of Physics, Academy of Sciences of the Czech Republic, CZ-162 00 Prague, Czech Republic.
doi:10.1103/PhysRevLett.122.183901 - arXiv:1812.06804 - pdf
Résumé The forces acting on optically trapped particles are commonly assumed to be conservative. Nonconservative scattering forces induce toroidal currents in overdamped liquid environments, with
negligible effects on position fluctuations. However, their impact in the underdamped regime remains unexplored. Here, we study the effect of nonconservative scattering forces on the underdamped
nonlinear dynamics of trapped nanoparticles at various air pressures. These forces induce significant low-frequency position fluctuations along the optical axis and the emergence of toroidal currents
in both position and velocity variables. Our experimental and theoretical results provide fundamental insights into the functioning of optical tweezers and a means for investigating nonequilibrium
steady states induced by nonconservative forces.
[PhD] M. Mangeat^1, De la dispersion aux vortex browniens dans des systèmes hors-équilibres confinés (From dispersion to Brownian vortices in confined out-of-equilibrium systems), PhD thesis, Université de Bordeaux (defended 25 September 2018).
^1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
Abstract This thesis aims to characterize the out-of-equilibrium stochastic dynamics of Brownian particles under confinement. Confinement is applied here through attractive potentials or impermeable boundaries that create entropic barriers. We first consider the dispersion of non-interacting particles in heterogeneous media. A cloud of Brownian particles spreads over time without reaching the Boltzmann equilibrium distribution, and its spreading is then characterized by an effective diffusivity smaller than the microscopic diffusivity. In a first chapter, we investigate the link between the confinement geometry and dispersion in the particular case of periodic microchannels. To this end, we compute the effective diffusivity without any dimensional-reduction hypothesis, in contrast to the standard Fick-Jacobs approach. A classification of the different dispersion regimes is then carried out, for any geometry and for both continuous and discontinuous channels. In a second chapter, we extend this analysis to dispersion in periodic lattices of short-range attractive spherical obstacles. The presence of an attractive potential can, surprisingly, enhance dispersion. We quantify this effect in the dilute regime and demonstrate its optimization for several potentials as well as for diffusion mediated by the surface of the spheres. We then study the stochastic dynamics of Brownian particles in an optical trap in the presence of a non-conservative force created by the radiation pressure of the laser. The perturbative expression of the stationary currents, which describe Brownian vortices, is derived at low pressures while keeping the inertial term in the underdamped Langevin equation. The expression of the spectral density is also computed, which makes it possible to observe the anisotropies of the trap and the effects of the non-conservative force. Most of the analytical expressions obtained during this thesis are asymptotically exact and are verified by numerical analyses based on the integration of the Langevin equation or the solution of partial differential equations.
[05] M. Mangeat^1, T. Guérin^1, and D. S. Dean^1, Dispersion in two-dimensional periodic channels with discontinuous profiles, J. Chem. Phys. 149, 124105 (2018).
^1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
doi:10.1063/1.5045183 - arXiv:1807.05366 - pdf
Résumé The effective diffusivity of Brownian tracer particles confined in periodic micro-channels is smaller than the microscopic diffusivity due to entropic trapping. Here, we study diffusion in
two-dimensional periodic channels whose cross section presents singular points, such as abrupt changes of radius or the presence of thin walls, with openings, delimiting periodic compartments
composing the channel. Dispersion in such systems is analyzed using the Fick-Jacobs (FJ) approximation. This approximation assumes a much faster equilibration in the lateral than in the axial
direction, along which the dispersion is measured. If the characteristic width a of the channel is much smaller than the period L of the channel, i.e., $\varepsilon = a/L$ is small, this assumption
is clearly valid for Brownian particles. For discontinuous channels, the FJ approximation is only valid at the lowest order in $\varepsilon$ and provides a rough, though on occasions rather accurate,
estimate of the effective diffusivity. Here we provide formulas for the effective diffusivity in discontinuous channels that are asymptotically exact at the next-to-leading order in $\varepsilon$.
Each discontinuity leads to a reduction of the effective diffusivity. We show that our theory is consistent with the picture of effective trapping rates associated with each discontinuity, for which
our theory provides explicit and asymptotically exact formulas. Our analytical predictions are confirmed by numerical analysis. Our results provide a precise quantification of the kinetic entropic
barriers associated with profile singularities.
[04] M. Mangeat^1, T. Guérin^1, and D. S. Dean^1, Dispersion in two dimensional channels—the Fick–Jacobs approximation revisited, J. Stat. Mech. 2017, 123205 (2017).
^1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
doi:10.1088/1742-5468/aa9bb5 - arXiv:1710.02699 - pdf
Résumé We examine the dispersion of Brownian particles in a symmetric two dimensional channel, this classical problem has been widely studied in the literature using the so called Fick–Jacobs’
approximation and its various improvements. Most studies rely on the reduction to an effective one dimensional diffusion equation, here we derive an explicit formula for the diffusion constant which
avoids this reduction. Using this formula the effective diffusion constant can be evaluated numerically without resorting to Brownian simulations. In addition, a perturbation theory can be developed
in $\varepsilon = h_0/L$, where $h_0$ is the characteristic channel height and $L$ the period. This perturbation theory confirms that the results of Kalinay and Percus (2006 Phys. Rev. E 74 041203), based on the reduction to one-dimensional diffusion, are exact at least to $\mathcal O(\varepsilon^6)$. Furthermore, we show how the Kalinay and Percus pseudo-linear approximation can be straightforwardly recovered. The approach proposed here can also be exploited to yield exact results in the limit $\varepsilon \to \infty$; we show that the diffusion constant remains finite in this limit and how the result can be obtained with a simple physical argument. Moreover, we show that the correction to the effective diffusion constant is of order $1/\varepsilon$ and remarkably has some universal characteristics. Numerically we compare the analytic results obtained with exact numerical calculations for a number of interesting channel geometries.
[03] M. Mangeat^1, T. Guérin^1, and D. S. Dean^1, Geometry controlled dispersion in periodic corrugated channels, EPL 118, 40004 (2017).
^1Univ. Bordeaux, CNRS, Laboratoire Ondes et Matière d'Aquitaine (LOMA), UMR 5798, F-33405 Talence, France.
doi:10.1209/0295-5075/118/40004 - arXiv:1709.03722 - pdf
Résumé The effective diffusivity $D_e$ of tracer particles diffusing in periodically corrugated axisymmetric two- and three-dimensional channels is studied. The majority of the previous studies of
this class of problems are based on perturbative analyses about narrow channels, where the problem can be reduced to an effectively one-dimensional one. Here we show how to analyze this class of
problems using a much more general approach which even includes the limit of infinitely wide channels. Using the narrow- and wide-channel asymptotics, we provide a Padé approximant scheme that is
able to describe the dispersion properties of a wide class of channels. Furthermore, we systematically identify all the exact asymptotic scaling regimes of $D_e$ and the accompanying physical
mechanisms that control dispersion, clarifying the distinction between smooth channels and compartmentalized ones, and identifying the regimes in which $D_e$ can be linked to first passage problems.
[02] X. Zhou^1, R. Zhao^1, K. Schwarz^2, M. Mangeat^2, E. C. Schwarz^1, M. Hamed^3,4, I. Bogeski^1, V. Helms^3, H. Rieger^2, and B. Qu^1, Bystander cells enhance NK cytotoxic efficiency by reducing
search time, Sci. Rep 7, 44357 (2017).
^1Biophysics, Center for Integrative Physiology and Molecular Medicine, School of Medicine, Saarland University, D-66421 Homburg, Germany.^2Department of Theoretical Physics, Saarland University,
D-66123 Saarbrücken, Germany.^3Center for Bioinformatics, Saarland University, D-66041 Saarbrücken, Germany.^4Institute for Biostatistics and Informatics in Medicine and Ageing Research, Rostock
University Medical Center, D-18057 Rostock, Germany.
doi:10.1038/srep44357 - pdf
Résumé Natural killer (NK) cells play a central role during innate immune responses by eliminating pathogen-infected or tumorigenic cells. In the microenvironment, NK cells encounter not only target
cells but also other cell types including non-target bystander cells. The impact of bystander cells on NK killing efficiency is, however, still elusive. In this study we show that the presence of
bystander cells, such as P815, monocytes or HUVEC, enhances NK killing efficiency. With bystander cells present, the velocity and persistence of NK cells were increased, whereas the degranulation of
lytic granules remained unchanged. Bystander cell-derived $H_2O_2$ was found to mediate the acceleration of NK cell migration. Using mathematical diffusion models, we confirm that local acceleration
of NK cells in the vicinity of bystander cells reduces their search time to locate target cells. In addition, we found that integrin $\beta$ chains ($\beta_1$, $\beta_2$ and $\beta_7$) on NK cells
are required for bystander-enhanced NK migration persistence. In conclusion, we show that acceleration of NK cell migration in the vicinity of $H_2O_2$-producing bystander cells reduces target cell
search time and enhances NK killing efficiency.
[01] M. Mangeat^1,2 and F. Zamponi^1, Quantitative approximation schemes for glasses, Phys. Rev. E 93, 012609 (2016).
^1Laboratoire de Physique Théorique (LPT), École Normale Supérieure, UMR 8549 CNRS, 24 Rue Lhomond, F-75005 Paris, France.^2Master ICFP, Département de Physique, École Normale Supérieure, 24 Rue
Lhomond, F-75005 Paris, France.
doi:10.1103/PhysRevE.93.012609 - arXiv:1510.03808 - pdf
Résumé By means of a systematic expansion around the infinite-dimensional solution, we obtain an approximation scheme to compute properties of glasses in low dimensions. The resulting equations take
as input the thermodynamic and structural properties of the equilibrium liquid, and from this they allow one to compute properties of the glass. They are therefore similar in spirit to the Mode
Coupling approximation scheme. Our scheme becomes exact, by construction, in dimension $d \to \infty$, and it can be improved systematically by adding more terms in the expansion.
Nash equilibrium - Wikiwand
In game theory, the Nash equilibrium is the most commonly-used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy
(holding all other players' strategies fixed).^[1] The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly.^[2]
If each player has chosen a strategy – an action plan based on what has happened so far in the game – and no one can increase one's own expected payoff by changing one's strategy while the other
players keep theirs unchanged, then the current set of strategy choices constitutes a Nash equilibrium.
If two players Alice and Bob choose strategies A and B, (A, B) is a Nash equilibrium if Alice has no other strategy available that does better than A at maximizing her payoff in response to Bob
choosing B, and Bob has no other strategy available that does better than B at maximizing his payoff in response to Alice choosing A. In a game in which Carol and Dan are also players, (A, B, C, D)
is a Nash equilibrium if A is Alice's best response to (B, C, D), B is Bob's best response to (A, C, D), and so forth.
Nash showed that there is a Nash equilibrium, possibly in mixed strategies, for every finite game.^[3]
Game theorists use Nash equilibrium to analyze the outcome of the strategic interaction of several decision makers. In a strategic interaction, the outcome for each decision-maker depends on the
decisions of the others as well as their own. The simple insight underlying Nash's idea is that one cannot predict the choices of multiple decision makers if one analyzes those decisions in
isolation. Instead, one must ask what each player would do taking into account what that player expects the others to do. Nash equilibrium requires that these choices be consistent: no player wishes to undo their decision given what the others are deciding.
The concept has been used to analyze hostile situations such as wars and arms races^[4] (see prisoner's dilemma), and also how conflict may be mitigated by repeated interaction (see tit-for-tat). It
has also been used to study to what extent people with different preferences can cooperate (see battle of the sexes), and whether they will take risks to achieve a cooperative outcome (see stag hunt
). It has been used to study the adoption of technical standards, and also the occurrence of bank runs and currency crises (see coordination game). Other applications include traffic flow (see
Wardrop's principle), how to organize auctions (see auction theory), the outcome of efforts exerted by multiple parties in the education process,^[5] regulatory legislation such as environmental
regulations (see tragedy of the commons),^[6] natural resource management,^[7] analysing strategies in marketing,^[8] penalty kicks in football (see matching pennies),^[9] robot navigation in crowds,
^[10] energy systems, transportation systems, evacuation problems^[11] and wireless communications.^[12]
Nash equilibrium is named after American mathematician John Forbes Nash Jr. The same idea was used in a particular application in 1838 by Antoine Augustin Cournot in his theory of oligopoly.^[13] In
Cournot's theory, each of several firms choose how much output to produce to maximize its profit. The best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when
each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis
of the stability of equilibrium. Cournot did not use the idea in any other applications, however, or define it generally.
The modern concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible pure strategies (which might put 100% of the
probability on one pure strategy; such pure strategies are a subset of mixed strategies). The concept of a mixed-strategy equilibrium was introduced by John von Neumann and Oskar Morgenstern in their
1944 book The Theory of Games and Economic Behavior, but their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any
zero-sum game with a finite set of actions.^[14] The contribution of Nash in his 1951 article "Non-Cooperative Games" was to define a mixed-strategy Nash equilibrium for any game with a finite set of
actions and prove that at least one (mixed-strategy) Nash equilibrium must exist in such a game. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition
of equilibrium. According to Nash, "an equilibrium point is an n-tuple such that each player's mixed strategy maximizes [their] payoff if the strategies of the others are held fixed. Thus each
player's strategy is optimal against those of the others." Putting the problem in this framework allowed Nash to employ the Kakutani fixed-point theorem in his 1950 paper to prove existence of
equilibria. His 1951 paper used the simpler Brouwer fixed-point theorem for the same purpose.^[15]
Game theorists have discovered that in some circumstances Nash equilibrium makes invalid predictions or fails to make a unique prediction. They have proposed many solution concepts ('refinements' of
Nash equilibria) designed to rule out implausible Nash equilibria. One particularly important issue is that some Nash equilibria may be based on threats that are not 'credible'. In 1965 Reinhard
Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what
happens if a game is repeated, or what happens if a game is played in the absence of complete information. However, subsequent refinements and extensions of Nash equilibrium share the main insight on
which Nash's concept rests: the equilibrium is a set of strategies such that each player's strategy is optimal given the choices of the others.
Nash equilibrium
A strategy profile is a set of strategies, one for each player. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing their strategy. To see what
this means, imagine that each player is told the strategies of the others. Suppose then that each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of
the other players as set in stone, can I benefit by changing my strategy?"
For instance, if any player's answer is "Yes" (they can benefit by switching), then that set of strategies is not a Nash equilibrium. But if every player prefers not to switch (or is indifferent between switching and not), then the strategy profile is a Nash equilibrium. Thus, each strategy in a Nash equilibrium is a best response to the other players' strategies in that equilibrium.^[16]
Formally, let ${\displaystyle S_{i}}$ be the set of all possible strategies for player ${\displaystyle i}$, where ${\displaystyle i=1,\ldots ,N}$. Let ${\displaystyle s^{*}=(s_{i}^{*},s_{-i}^{*})}$
be a strategy profile, a set consisting of one strategy for each player, where ${\displaystyle s_{-i}^{*}}$ denotes the ${\displaystyle N-1}$ strategies of all the players except ${\displaystyle i}$.
Let ${\displaystyle u_{i}(s_{i},s_{-i}^{*})}$ be player i's payoff as a function of the strategies. The strategy profile ${\displaystyle s^{*}}$ is a Nash equilibrium if
${\displaystyle u_{i}(s_{i}^{*},s_{-i}^{*})\geq u_{i}(s_{i},s_{-i}^{*})\;\;{\rm {for\;all}}\;\;s_{i}\in S_{i}}$
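For a finite game given by explicit payoff tables, this inequality can be checked directly by enumerating every unilateral deviation. The short Python sketch below does this for a pure-strategy profile; the 2×2 payoff matrices used here are illustrative placeholders (a prisoner's-dilemma-style game), not values taken from any example in the article.

```python
# Minimal sketch: check the defining inequality for a finite two-player game.
# A profile (row, col) is a Nash equilibrium if no unilateral deviation by
# either player increases that player's payoff.

def is_nash(row, col, u_row, u_col):
    n_rows, n_cols = len(u_row), len(u_row[0])
    # Row player's deviations: change the row, keep the column fixed.
    if any(u_row[r][col] > u_row[row][col] for r in range(n_rows)):
        return False
    # Column player's deviations: change the column, keep the row fixed.
    if any(u_col[row][c] > u_col[row][col] for c in range(n_cols)):
        return False
    return True

u_row = [[3, 0], [5, 1]]   # row player's payoffs (illustrative: cooperate, defect)
u_col = [[3, 5], [0, 1]]   # column player's payoffs

equilibria = [(r, c) for r in range(2) for c in range(2)
              if is_nash(r, c, u_row, u_col)]
print(equilibria)   # [(1, 1)]: mutual defection is the only pure equilibrium
```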
A game can have more than one Nash equilibrium. Even if the equilibrium is unique, it might be weak: a player might be indifferent among several strategies given the other players' choices. The equilibrium is called a strict Nash equilibrium if the inequality is strict, so that each player's equilibrium strategy is their unique best response:
${\displaystyle u_{i}(s_{i}^{*},s_{-i}^{*})>u_{i}(s_{i},s_{-i}^{*})\;\;{\rm {for\;all}}\;\;s_{i}\in S_{i},\;s_{i}\neq s_{i}^{*}}$
The strategy set ${\displaystyle S_{i}}$ can be different for different players, and its elements can be a variety of mathematical objects. Most simply, a player might choose between two strategies, e.g. ${\displaystyle S_{i}=\{{\text{Yes}},{\text{No}}\}}$. Or, the strategy set might be a finite set of conditional strategies responding to other players, e.g. ${\displaystyle S_{i}=\{{\text{Yes}}\,|\,p={\text{Low}},\;{\text{No}}\,|\,p={\text{High}}\}}$. Or, it might be an infinite set, a continuum or unbounded, e.g. ${\displaystyle S_{i}=\{{\text{Price}}\}}$ such that ${\displaystyle {\text{Price}}}$ is a non-negative real number. Nash's existence proofs assume a finite strategy set, but the concept of Nash equilibrium does not require it.
Strict/Non-strict equilibrium
Suppose that in the Nash equilibrium, each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, would I suffer a
loss by changing my strategy?"
If every player's answer is "Yes", then the equilibrium is classified as a strict Nash equilibrium.^[17]
If instead, for some player, there is exact equality between the strategy in Nash equilibrium and some other strategy that gives exactly the same payout (i.e. the player is indifferent between
switching and not), then the equilibrium is classified as a weak^[note 1] or non-strict Nash equilibrium.
Equilibria for coalitions
The Nash equilibrium defines stability only in terms of individual player deviations. In cooperative games such a concept is not convincing enough. Strong Nash equilibrium allows for deviations by
every conceivable coalition.^[18] Formally, a strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way
that benefits all of its members.^[19] However, the strong Nash concept is sometimes perceived as too "strong" in that the environment allows for unlimited private communication. In fact, strong Nash
equilibrium has to be Pareto efficient. As a result of these requirements, strong Nash is too rare to be useful in many branches of game theory. However, in games such as elections with many more
players than possible outcomes, it can be more common than a stable equilibrium.
A refined Nash equilibrium known as coalition-proof Nash equilibrium (CPNE)^[18] occurs when players cannot do better even if they are allowed to communicate and make "self-enforcing" agreement to
deviate. Every correlated strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE.^[20] Further, it is possible for a game to have a Nash equilibrium that is resilient
against coalitions less than a specified size, k. CPNE is related to the theory of the core.
Nash's existence theorem
Nash proved that if mixed strategies (where a player chooses probabilities of using various pure strategies) are allowed, then every game with a finite number of players in which each player can
choose from finitely many pure strategies has at least one Nash equilibrium, which might be a pure strategy for each player or might be a probability distribution over strategies for each player.
Nash equilibria need not exist if the set of choices is infinite and non-compact. For example:
• A game where two players simultaneously name a number and the player naming the larger number wins does not have a NE, as the set of choices is not compact because it is unbounded.
• Each of two players chooses a real number strictly less than 5 and the winner is whoever has the biggest number; no biggest number strictly less than 5 exists (if the number could equal 5, the
Nash equilibrium would have both players choosing 5 and tying the game). Here, the set of choices is not compact because it is not closed.
However, a Nash equilibrium exists if the set of choices is compact with each player's payoff continuous in the strategies of all the players.^[21]
Rosen's existence theorem
Rosen^[22] extended Nash's existence theorem in several ways. He considers an n-player game, in which the strategy of each player i is a vector s[i] in the Euclidean space R^m[i]. Denote m:=m[1]+...+m[n]; so a strategy-tuple is a vector in R^m. Part of the definition of a game is a subset S of R^m such that the strategy-tuple must be in S. This means that the actions of players may potentially be constrained based on actions of other players. A common special case of the model is when S is a Cartesian product of convex sets S[1],...,S[n], such that the strategy of player i must be in S[i]. This represents the case that the actions of each player i are constrained independently of other players' actions. If the following conditions hold:
• S is convex, closed and bounded;
• Each payoff function u[i] is continuous in the strategies of all players, and concave in s[i] for every fixed value of s[-i];
Then a Nash equilibrium exists. The proof uses the Kakutani fixed-point theorem. Rosen also proves that, under certain technical conditions which include strict concavity, the equilibrium is unique. Nash's result refers to the special case in which each S[i] is a simplex (representing all possible mixtures of pure strategies), and the payoff functions of all players are bilinear functions of the strategies.
The Nash equilibrium may sometimes appear non-rational from a third-person perspective. This is because a Nash equilibrium is not necessarily Pareto optimal.
Nash equilibrium may also have non-rational consequences in sequential games because players may "threaten" each other with threats they would not actually carry out. For such games the subgame
perfect Nash equilibrium may be more meaningful as a tool of analysis.
Coordination game
A coordination game showing payoffs for player 1 (row) \ player 2 (column)
Player 1 strategy Player 2 strategy
Player 2 adopts strategy A Player 2 adopts strategy B
Player 1 adopts strategy A 4, 4 1, 3
Player 1 adopts strategy B 3, 1 2, 2
The coordination game is a classic two-player, two-strategy game, as shown in the example payoff matrix to the right. There are two pure-strategy equilibria, (A,A) with payoff 4 for each player and
(B,B) with payoff 2 for each. The combination (B,B) is a Nash equilibrium because if either player unilaterally changes their strategy from B to A, their payoff will fall from 2 to 1.
The Stag Hunt
Player 1 strategy Player 2 strategy
Hunt stag Hunt rabbit
Hunt stag 2, 2 0, 1
Hunt rabbit 1, 0 1, 1
A famous example of a coordination game is the stag hunt. Two players may choose to hunt a stag or a rabbit, the stag providing more meat (4 utility units, 2 for each player) than the rabbit (1
utility unit). The caveat is that the stag must be cooperatively hunted, so if one player attempts to hunt the stag, while the other hunts the rabbit, the stag hunter will totally fail, for a payoff
of 0, whereas the rabbit hunter will succeed, for a payoff of 1. The game has two equilibria, (stag, stag) and (rabbit, rabbit), because a player's optimal strategy depends on their expectation on
what the other player will do. If one hunter trusts that the other will hunt the stag, they should hunt the stag; however if they think the other will hunt the rabbit, they too will hunt the rabbit.
This game is used as an analogy for social cooperation, since much of the benefit that people gain in society depends upon people cooperating and implicitly trusting one another to act in a manner
corresponding with cooperation.
Driving on a road against an oncoming car, and having to choose either to swerve on the left or to swerve on the right of the road, is also a coordination game. For example, with payoffs 10 meaning
no crash and 0 meaning a crash, the coordination game can be defined with the following payoff matrix:
The driving game
Player 1 strategy Player 2 strategy
Drive on the left Drive on the right
Drive on the left 10, 10 0, 0
Drive on the right 0, 0 10, 10
In this case there are two pure-strategy Nash equilibria, when both choose to either drive on the left or on the right. If we admit mixed strategies (where a pure strategy is chosen at random,
subject to some fixed probability), then there are three Nash equilibria for the same case: two we have seen from the pure-strategy form, where the probabilities are (0%, 100%) for player one, (0%,
100%) for player two; and (100%, 0%) for player one, (100%, 0%) for player two respectively. We add another where the probabilities for each player are (50%, 50%).
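Assuming the crash/no-crash payoffs stated above (10 when both drivers pick the same side, 0 otherwise), a short Python sketch can recover all three equilibria: the two pure ones by a best-response check, and the mixed one from the indifference condition.

```python
# Sketch for the driving game, assuming the stated payoffs. Both drivers
# receive the same payoff in every cell, so a single matrix u serves for
# both players' best-response checks.

u = [[10, 0], [0, 10]]
sides = ["left", "right"]

# Pure equilibria: neither driver gains by unilaterally switching sides.
pure = [(sides[r], sides[c]) for r in range(2) for c in range(2)
        if u[r][c] >= max(u[rr][c] for rr in range(2))     # row driver's check
        and u[r][c] >= max(u[r][cc] for cc in range(2))]   # column driver's check

# Mixed equilibrium: each driver mixes so that the other is indifferent:
# 10*q + 0*(1 - q) = 0*q + 10*(1 - q)  =>  q = 1/2 (and p = 1/2 by symmetry).
q = 10 / (10 + 10)

print(pure)       # [('left', 'left'), ('right', 'right')]
print((q, q))     # (0.5, 0.5): the third, mixed, equilibrium
```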
Network traffic
Sample network graph. Values on edges are the travel time experienced by a "car" traveling down that edge; x is the number of cars traveling via that edge.
An application of Nash equilibria is in determining the expected flow of traffic in a network. Consider the graph on the right. If we assume that there are ${\displaystyle x}$ "cars" traveling from A
to D, what is the expected distribution of traffic in the network?
This situation can be modeled as a "game", where every traveler has a choice of 3 strategies and where each strategy is a route from A to D (one of ABD, ABCD, or ACD). The "payoff" of each strategy
is the travel time of each route. In the graph on the right, a car travelling via ABD experiences travel time of ${\displaystyle 1+{\frac {x}{100}}+2}$, where ${\displaystyle x}$ is the number of
cars traveling on edge AB. Thus, payoffs for any given strategy depend on the choices of the other players, as is usual. However, the goal, in this case, is to minimize travel time, not maximize it.
Equilibrium will occur when the time on all paths is exactly the same. When that happens, no single driver has any incentive to switch routes, since it can only add to their travel time. For the
graph on the right, if, for example, 100 cars are travelling from A to D, then equilibrium will occur when 25 drivers travel via ABD, 50 via ABCD, and 25 via ACD. Every driver now has a total travel
time of 3.75 (to see this, a total of 75 cars take the AB edge, and likewise, 75 cars take the CD edge).
Notice that this distribution is not, actually, socially optimal. If the 100 cars agreed that 50 travel via ABD and the other 50 through ACD, then travel time for any single car would actually be
3.5, which is less than 3.75. This is also the Nash equilibrium if the path between B and C is removed, which means that adding another possible route can decrease the efficiency of the system, a
phenomenon known as Braess's paradox.
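A small numerical sketch can verify these flows. The per-edge costs below are one assumed reconstruction consistent with the travel times quoted above (only the resulting path times are stated in the text); with them, the 25/50/25 split equalizes all three route times at 3.75, while the socially better 50/0/50 split is not an equilibrium because route ABCD would then be faster.

```python
# Sketch of the routing example. Edge costs are an assumed reconstruction that
# reproduces the numbers in the text: t_AB = 1 + x_AB/100, t_BD = 2,
# t_AC = 2, t_CD = 1 + x_CD/100, t_BC = 0.25 (the per-edge constants are a
# guess; only the path travel times are given in the text).

def path_times(f_abd, f_abcd, f_acd):
    x_ab = f_abd + f_abcd          # cars on edge AB
    x_cd = f_abcd + f_acd          # cars on edge CD
    t_ab, t_cd = 1 + x_ab / 100, 1 + x_cd / 100
    t_bd, t_ac, t_bc = 2, 2, 0.25
    return {"ABD": t_ab + t_bd, "ABCD": t_ab + t_bc + t_cd, "ACD": t_ac + t_cd}

print(path_times(25, 50, 25))   # all routes take 3.75 -> equilibrium
print(path_times(50, 0, 50))    # ABD and ACD take 3.5, but ABCD would take 3.25,
                                # so a driver would deviate: not an equilibrium
                                # (it becomes one once the BC shortcut is removed)
```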
Competition game
A competition game
Player 1 strategy Player 2 strategy
Choose "0" Choose "1" Choose "2" Choose "3"
Choose "0" 0, 0 2, −2 2, −2 2, −2
Choose "1" −2, 2 1, 1 3, −1 3, −1
Choose "2" −2, 2 −1, 3 2, 2 4, 0
Choose "3" −2, 2 −1, 3 0, 4 3, 3
This can be illustrated by a two-player game in which both players simultaneously choose an integer from 0 to 3 and they both win the smaller of the two numbers in points. In addition, if one player
chooses a larger number than the other, then they have to give up two points to the other.
This game has a unique pure-strategy Nash equilibrium: both players choosing 0. In any other cell of the table, at least one player can increase their payoff by switching to a smaller number, so no other pure-strategy profile is an equilibrium. Although it would not fit the definition of a competition game, if the game is modified so that the two players win the named amount if they both choose the same number, and otherwise win nothing, then there are 4 Nash equilibria: (0,0), (1,1), (2,2), and (3,3).
Nash equilibria in a payoff matrix
There is an easy numerical way to identify Nash equilibria in a payoff matrix. It is especially helpful in two-person games where players have more than two strategies, in which case formal analysis may become too long. This rule does not apply to the case where mixed (stochastic) strategies are of interest. The rule goes as follows: if the first payoff number in the payoff pair of a cell is the maximum of that cell's column and the second number is the maximum of that cell's row, then the cell represents a Nash equilibrium.
A payoff matrix – Nash equilibria in bold
Player 1 strategy Player 2 strategy
Option A Option B Option C
Option A 0, 0 25, 40 5, 10
Option B 40, 25 0, 0 5, 15
Option C 10, 5 15, 5 10, 10
We can apply this rule to the 3×3 matrix shown above:
Using the rule, we can very quickly (much faster than with formal analysis) see that the Nash equilibria cells are (B,A), (A,B), and (C,C). Indeed, for cell (B,A), 40 is the maximum of the first
column and 25 is the maximum of the second row. For (A,B), 25 is the maximum of the second column and 40 is the maximum of the first row; the same applies for cell (C,C). For other cells, either one
or both of the duplet members are not the maximum of the corresponding rows and columns.
This said, the actual mechanics of finding equilibrium cells is obvious: find the maximum of a column and check if the second member of the pair is the maximum of the row. If these conditions are
met, the cell represents a Nash equilibrium. Check all columns this way to find all NE cells. An N×N matrix may have between 0 and N×N pure-strategy Nash equilibria.
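The rule is easy to mechanize. A short Python sketch applied to the 3×3 matrix above recovers the same three cells:

```python
# Sketch of the rule: a cell is a pure Nash equilibrium when its first payoff
# is the maximum of its column (among the row player's payoffs) and its second
# payoff is the maximum of its row (among the column player's payoffs).

labels = ["A", "B", "C"]
payoffs = {
    ("A", "A"): (0, 0),   ("A", "B"): (25, 40), ("A", "C"): (5, 10),
    ("B", "A"): (40, 25), ("B", "B"): (0, 0),   ("B", "C"): (5, 15),
    ("C", "A"): (10, 5),  ("C", "B"): (15, 5),  ("C", "C"): (10, 10),
}

def nash_cells(payoffs):
    cells = []
    for r in labels:
        for c in labels:
            p1, p2 = payoffs[(r, c)]
            if (p1 == max(payoffs[(rr, c)][0] for rr in labels)
                    and p2 == max(payoffs[(r, cc)][1] for cc in labels)):
                cells.append((r, c))
    return cells

print(nash_cells(payoffs))   # [('A', 'B'), ('B', 'A'), ('C', 'C')]
```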
The concept of stability, useful in the analysis of many kinds of equilibria, can also be applied to Nash equilibria.
A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold:
1. the player who did not change has no better strategy in the new circumstance
2. the player who did change is now playing with a strictly worse strategy.
If these cases are both met, then a player with the small change in their mixed strategy will return immediately to the Nash equilibrium. The equilibrium is said to be stable. If condition one does
not hold then the equilibrium is unstable. If only condition one holds then there are likely to be an infinite number of optimal strategies for the player who changed.
In the "driving game" example above there are both stable and unstable equilibria. The equilibria involving mixed strategies with 100% probabilities are stable. If either player changes their
probabilities slightly, they will be both at a disadvantage, and their opponent will have no reason to change their strategy in turn. The (50%,50%) equilibrium is unstable. If either player changes
their probabilities (which would neither benefit or damage the expectation of the player who did the change, if the other player's mixed strategy is still (50%,50%)), then the other player
immediately has a better strategy at either (0%, 100%) or (100%, 0%).
Stability is crucial in practical applications of Nash equilibria, since the mixed strategy of each player is not perfectly known, but has to be inferred from statistical distribution of their
actions in the game. In this case unstable equilibria are very unlikely to arise in practice, since any minute change in the proportions of each strategy seen will lead to a change in strategy and
the breakdown of the equilibrium.
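This instability can be seen numerically in the driving game (same 10/0 payoffs as above). In the sketch below, once the opponent's probability of driving on the left moves even slightly away from 50%, the player's two pure strategies no longer give equal expected payoff, so there is a strictly better pure response.

```python
# Numeric illustration of the instability of the (50%, 50%) mixed equilibrium
# in the driving game. Against an opponent who drives left with probability q,
# driving left pays 10*q in expectation and driving right pays 10*(1 - q);
# the two are equal only at q = 0.5.

def expected_payoffs(q_left):
    return 10 * q_left, 10 * (1 - q_left)

for q in (0.50, 0.51, 0.49):
    left, right = expected_payoffs(q)
    best = "indifferent" if left == right else ("left" if left > right else "right")
    print(f"opponent drives left with prob {q:.2f}: "
          f"E[left] = {left:.1f}, E[right] = {right:.1f} -> best response: {best}")
```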
Finally, in the 1980s, building on such ideas, Mertens-stable equilibria were introduced as a solution concept. Mertens-stable equilibria satisfy both forward induction and backward induction. In a game-theory context, stable equilibria now usually refer to Mertens-stable equilibria.
If a game has a unique Nash equilibrium and is played among players under certain conditions, then the NE strategy set will be adopted. Sufficient conditions to guarantee that the Nash equilibrium is
played are:
1. The players all will do their utmost to maximize their expected payoff as described by the game.
2. The players are flawless in execution.
3. The players have sufficient intelligence to deduce the solution.
4. The players know the planned equilibrium strategy of all of the other players.
5. The players believe that a deviation in their own strategy will not cause deviations by any other players.
6. There is common knowledge that all players meet these conditions, including this one. So, not only must each player know the other players meet the conditions, but also they must know that they
all know that they meet them, and know that they know that they know that they meet them, and so on.
Where the conditions are not met
Examples of game theory problems in which these conditions are not met:
1. The first condition is not met if the game does not correctly describe the quantities a player wishes to maximize. In this case there is no particular reason for that player to adopt an
equilibrium strategy. For instance, the prisoner's dilemma is not a dilemma if either player is happy to be jailed indefinitely.
2. Intentional or accidental imperfection in execution. For example, a computer capable of flawless logical play facing a second flawless computer will result in equilibrium. Introduction of
imperfection will lead to its disruption either through loss to the player who makes the mistake, or through negation of the common knowledge criterion leading to possible victory for the player.
(An example would be a player suddenly putting the car into reverse in the game of chicken, ensuring a no-loss no-win scenario).
3. In many cases, the third condition is not met because, even though the equilibrium must exist, it is unknown due to the complexity of the game, for instance in Chinese chess.^[23] Or, if known,
it may not be known to all players, as when playing tic-tac-toe with a small child who desperately wants to win (meeting the other criteria).
4. The criterion of common knowledge may not be met even if all players do, in fact, meet all the other criteria. Players wrongly distrusting each other's rationality may adopt counter-strategies to
expected irrational play on their opponents’ behalf. This is a major consideration in "chicken" or an arms race, for example.
Where the conditions are met
In his Ph.D. dissertation, John Nash proposed two interpretations of his equilibrium concept, with the objective of showing how equilibrium points can be connected with observable phenomena.
(...) One interpretation is rationalistic: if we assume that players are rational, know the full structure of the game, the game is played just once, and there is just one Nash equilibrium, then
players will play according to that equilibrium.
This idea was formalized by R. Aumann and A. Brandenburger (1995, "Epistemic Conditions for Nash Equilibrium", Econometrica, 63, 1161-1180), who interpreted each player's mixed strategy as a conjecture about the behaviour of other players and showed that if the game and the rationality of the players are mutually known and these conjectures are commonly known, then the conjectures must be a Nash equilibrium (a common prior assumption is needed for this result in general, but not in the case of two players; in that case, the conjectures need only be mutually known).
A second interpretation, which Nash referred to as the mass-action interpretation, is less demanding on players:
[i]t is unnecessary to assume that the participants have full knowledge of the total structure of the game, or the ability and inclination to go through any complex reasoning processes. What is
assumed is that there is a population of participants for each position in the game, which will be played throughout time by participants drawn at random from the different populations. If there
is a stable average frequency with which each pure strategy is employed by the average member of the appropriate population, then this stable average frequency constitutes a mixed-strategy Nash equilibrium.
For a formal result along these lines, see Kuhn, H., et al., 1996, "The Work of John Nash in Game Theory", Journal of Economic Theory, 69, 153-185.
Due to the limited conditions in which NE can actually be observed, they are rarely treated as a guide to day-to-day behaviour, or observed in practice in human negotiations. However, as a
theoretical concept in economics and evolutionary biology, the NE has explanatory power. The payoff in economics is utility (or sometimes money), and in evolutionary biology is gene transmission;
both are the fundamental bottom line of survival. Researchers who apply game theory in these fields claim that strategies failing to maximize these for whatever reason will be competed out of the
market or environment, which are ascribed the ability to test all strategies. This conclusion is drawn from the "stability" theory above. In these situations the assumption that the strategy observed
is actually a NE has often been borne out by research.^[24]
Extensive and Normal form illustrations that show the difference between SPNE and other NE. The blue equilibrium is not subgame perfect because player two makes a non-credible threat at 2(2) to be
unkind (U).
The set of Nash equilibria is a superset of the set of subgame perfect Nash equilibria. A subgame perfect equilibrium additionally requires that the strategy profile is a Nash equilibrium in every subgame of the game. This eliminates all non-credible threats, that is, strategies that contain non-rational moves made in order to make the counter-player change their strategy.
The image to the right shows a simple sequential game that illustrates the issue with subgame imperfect Nash equilibria. In this game player one chooses left (L) or right (R), after which player two is called upon to be kind (K) or unkind (U) to player one. However, player two only stands to gain from being unkind if player one goes left; if player one goes right, the rational player two would de facto be kind to them in that subgame. Nevertheless, the non-credible threat of being unkind at 2(2) is still part of the blue (L, (U,U)) Nash equilibrium. Therefore, if rational behavior can be expected of both parties, the subgame perfect Nash equilibrium may be a more meaningful solution concept when such dynamic inconsistencies arise.
Proof using the Kakutani fixed-point theorem
Nash's original proof (in his thesis) used Brouwer's fixed-point theorem (e.g., see below for a variant). This section presents a simpler proof via the Kakutani fixed-point theorem, following Nash's
1950 paper (he credits David Gale with the observation that such a simplification is possible).
To prove the existence of a Nash equilibrium, let ${\displaystyle r_{i}(\sigma _{-i})}$ be the best response of player i to the strategies of all other players.
${\displaystyle r_{i}(\sigma _{-i})=\mathop {\underset {\sigma _{i}}{\operatorname {arg\,max} }} u_{i}(\sigma _{i},\sigma _{-i})}$
Here, ${\displaystyle \sigma \in \Sigma }$, where ${\displaystyle \Sigma =\Sigma _{i}\times \Sigma _{-i}}$, is a mixed-strategy profile in the set of all mixed strategies and ${\displaystyle u_{i}}$
is the payoff function for player i. Define a set-valued function ${\displaystyle r\colon \Sigma \rightarrow 2^{\Sigma }}$ such that ${\displaystyle r=r_{i}(\sigma _{-i})\times r_{-i}(\sigma _{i})}$.
The existence of a Nash equilibrium is equivalent to ${\displaystyle r}$ having a fixed point.
Kakutani's fixed point theorem guarantees the existence of a fixed point if the following four conditions are satisfied.
1. ${\displaystyle \Sigma }$ is compact, convex, and nonempty.
2. ${\displaystyle r(\sigma )}$ is nonempty.
3. ${\displaystyle r(\sigma )}$ is upper hemicontinuous
4. ${\displaystyle r(\sigma )}$ is convex.
Condition 1. is satisfied from the fact that ${\displaystyle \Sigma }$ is a simplex and thus compact. Convexity follows from players' ability to mix strategies. ${\displaystyle \Sigma }$ is nonempty
as long as players have strategies.
Conditions 2. and 3. are satisfied by way of Berge's maximum theorem. Because ${\displaystyle u_{i}}$ is continuous and ${\displaystyle \Sigma }$ is compact, ${\displaystyle r(\sigma )}$ is non-empty and upper hemicontinuous.
Condition 4. is satisfied as a result of mixed strategies. Suppose ${\displaystyle \sigma _{i},\sigma '_{i}\in r(\sigma _{-i})}$; then ${\displaystyle \lambda \sigma _{i}+(1-\lambda )\sigma '_{i}\in r(\sigma _{-i})}$, i.e. if two strategies maximize payoffs, then a mix between the two strategies will yield the same payoff.
Therefore, there exists a fixed point in ${\displaystyle r}$ and a Nash equilibrium.^[25]
When Nash made this point to John von Neumann in 1949, von Neumann famously dismissed it with the words, "That's trivial, you know. That's just a fixed-point theorem." (See Nasar, 1998, p. 94.)
Alternate proof using the Brouwer fixed-point theorem
We have a game ${\displaystyle G=(N,A,u)}$ where ${\displaystyle N}$ is the number of players and ${\displaystyle A=A_{1}\times \cdots \times A_{N}}$ is the action set for the players. All of the
action sets ${\displaystyle A_{i}}$ are finite. Let ${\displaystyle \Delta =\Delta _{1}\times \cdots \times \Delta _{N}}$ denote the set of mixed strategies for the players. The finiteness of the ${\
displaystyle A_{i}}$s ensures the compactness of ${\displaystyle \Delta }$.
We can now define the gain functions. For a mixed strategy ${\displaystyle \sigma \in \Delta }$, we let the gain for player ${\displaystyle i}$ on action ${\displaystyle a\in A_{i}}$ be
${\displaystyle {\text{Gain}}_{i}(\sigma ,a)=\max\{0,u_{i}(a,\sigma _{-i})-u_{i}(\sigma _{i},\sigma _{-i})\}.}$
The gain function represents the benefit a player gets by unilaterally changing their strategy. We now define ${\displaystyle g=(g_{1},\dotsc ,g_{N})}$ where
${\displaystyle g_{i}(\sigma )(a)=\sigma _{i}(a)+{\text{Gain}}_{i}(\sigma ,a)}$
for ${\displaystyle \sigma \in \Delta ,a\in A_{i}}$. We see that
${\displaystyle \sum _{a\in A_{i}}g_{i}(\sigma )(a)=\sum _{a\in A_{i}}\sigma _{i}(a)+{\text{Gain}}_{i}(\sigma ,a)=1+\sum _{a\in A_{i}}{\text{Gain}}_{i}(\sigma ,a)>0.}$
Next we define:
${\displaystyle {\begin{cases}f=(f_{1},\cdots ,f_{N}):\Delta \to \Delta \\f_{i}(\sigma )(a)={\frac {g_{i}(\sigma )(a)}{\sum _{b\in A_{i}}g_{i}(\sigma )(b)}}&a\in A_{i}\end{cases}}}$
It is easy to see that each ${\displaystyle f_{i}}$ is a valid mixed strategy in ${\displaystyle \Delta _{i}}$. It is also easy to check that each ${\displaystyle f_{i}}$ is a continuous function of ${\displaystyle \sigma }$, and hence ${\displaystyle f}$ is a continuous function. As the cross product of a finite number of compact convex sets, ${\displaystyle \Delta }$ is also compact and convex. Applying the Brouwer fixed point theorem to ${\displaystyle f}$ and ${\displaystyle \Delta }$ we conclude that ${\displaystyle f}$ has a fixed point in ${\displaystyle \Delta }$, call it ${\displaystyle \sigma ^{*}}$. We claim that ${\displaystyle \sigma ^{*}}$ is a Nash equilibrium in ${\displaystyle G}$. For this purpose, it suffices to show that
${\displaystyle \forall i\in \{1,\cdots ,N\},\forall a\in A_{i}:\quad {\text{Gain}}_{i}(\sigma ^{*},a)=0.}$
This simply states that each player gains no benefit by unilaterally changing their strategy, which is exactly the necessary condition for a Nash equilibrium.
Now assume that the gains are not all zero. Therefore, ${\displaystyle \exists i\in \{1,\cdots ,N\}}$ and ${\displaystyle a\in A_{i}}$ such that ${\displaystyle {\text{Gain}}_{i}(\sigma ^{*},a)>0}$. Note then that
${\displaystyle \sum _{a\in A_{i}}g_{i}(\sigma ^{*})(a)=1+\sum _{a\in A_{i}}{\text{Gain}}_{i}(\sigma ^{*},a)>1.}$
So let
${\displaystyle C=\sum _{a\in A_{i}}g_{i}(\sigma ^{*})(a).}$
Also we shall denote ${\displaystyle {\text{Gain}}(i,\cdot )}$ as the gain vector indexed by actions in ${\displaystyle A_{i}}$. Since ${\displaystyle \sigma ^{*}}$ is the fixed point we have:
${\displaystyle {\begin{aligned}\sigma ^{*}=f(\sigma ^{*})&\Rightarrow \sigma _{i}^{*}=f_{i}(\sigma ^{*})\\&\Rightarrow \sigma _{i}^{*}={\frac {g_{i}(\sigma ^{*})}{\sum _{a\in A_{i}}g_{i}(\sigma ^{*})(a)}}\\&\Rightarrow \sigma _{i}^{*}={\frac {1}{C}}\left(\sigma _{i}^{*}+{\text{Gain}}_{i}(\sigma ^{*},\cdot )\right)\\&\Rightarrow C\sigma _{i}^{*}=\sigma _{i}^{*}+{\text{Gain}}_{i}(\sigma ^{*},\cdot )\\&\Rightarrow \left(C-1\right)\sigma _{i}^{*}={\text{Gain}}_{i}(\sigma ^{*},\cdot )\\&\Rightarrow \sigma _{i}^{*}=\left({\frac {1}{C-1}}\right){\text{Gain}}_{i}(\sigma ^{*},\cdot ).\end{aligned}}}$
Since ${\displaystyle C>1}$ we have that ${\displaystyle \sigma _{i}^{*}}$ is some positive scaling of the vector ${\displaystyle {\text{Gain}}_{i}(\sigma ^{*},\cdot )}$. Now we claim that
${\displaystyle \forall a\in A_{i}:\quad \sigma _{i}^{*}(a)(u_{i}(a_{i},\sigma _{-i}^{*})-u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*}))=\sigma _{i}^{*}(a){\text{Gain}}_{i}(\sigma ^{*},a)}$
To see this, first if ${\displaystyle {\text{Gain}}_{i}(\sigma ^{*},a)>0}$ then this is true by definition of the gain function. Now assume that ${\displaystyle {\text{Gain}}_{i}(\sigma ^{*},a)=0}$.
By our previous statements we have that
${\displaystyle \sigma _{i}^{*}(a)=\left({\frac {1}{C-1}}\right){\text{Gain}}_{i}(\sigma ^{*},a)=0}$
and so the left term is zero, giving us that the entire expression is ${\displaystyle 0}$ as needed.
So we finally have that
${\displaystyle {\begin{aligned}0&=u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*})-u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*})\\&=\left(\sum _{a\in A_{i}}\sigma _{i}^{*}(a)u_{i}(a_{i},\sigma _{-i}^{*})\right)-u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*})\\&=\sum _{a\in A_{i}}\sigma _{i}^{*}(a)\left(u_{i}(a_{i},\sigma _{-i}^{*})-u_{i}(\sigma _{i}^{*},\sigma _{-i}^{*})\right)\\&=\sum _{a\in A_{i}}\sigma _{i}^{*}(a){\text{Gain}}_{i}(\sigma ^{*},a)&&{\text{by the previous statements}}\\&=\sum _{a\in A_{i}}\left(C-1\right)\sigma _{i}^{*}(a)^{2}>0\end{aligned}}}$
where the last inequality follows since ${\displaystyle \sigma _{i}^{*}}$ is a non-zero vector. But this is a clear contradiction, so all the gains must indeed be zero. Therefore, ${\displaystyle \sigma ^{*}}$ is a Nash equilibrium for ${\displaystyle G}$ as needed.
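The construction above can be made concrete for a small game. The sketch below implements the gain functions and the map f for the standard matching-pennies payoffs (also tabulated in a later section): at the mixed equilibrium (1/2, 1/2) all gains vanish and the profile is a fixed point of f, while at a non-equilibrium profile some gain is positive and f moves the profile.

```python
# Concrete sketch of Gain_i and f for a two-player game given by payoff
# matrices (matching pennies). This only illustrates the map used in the
# proof; it is not a general equilibrium-finding algorithm.

U = {
    0: [[-1, 1], [1, -1]],   # player A's payoffs, rows = A's actions (H, T)
    1: [[1, -1], [-1, 1]],   # player B's payoffs, columns = B's actions (H, T)
}

def expected_payoff(i, sigma):
    return sum(sigma[0][a] * sigma[1][b] * U[i][a][b]
               for a in range(2) for b in range(2))

def gain(i, action, sigma):
    """Gain_i(sigma, a): improvement from switching player i to pure action a."""
    pure = [1.0 if x == action else 0.0 for x in range(2)]
    deviated = (pure, sigma[1]) if i == 0 else (sigma[0], pure)
    return max(0.0, expected_payoff(i, deviated) - expected_payoff(i, sigma))

def f(sigma):
    new = []
    for i in range(2):
        g = [sigma[i][a] + gain(i, a, sigma) for a in range(2)]
        total = sum(g)                  # always >= 1, so the division is safe
        new.append([x / total for x in g])
    return tuple(new)

equilibrium = ([0.5, 0.5], [0.5, 0.5])
print(f(equilibrium))   # ([0.5, 0.5], [0.5, 0.5]): fixed point, all gains zero

off = ([0.9, 0.1], [0.5, 0.5])
print(f(off))           # B's mix shifts toward H (positive gain); A's mix stays
                        # (0.9, 0.1), since A is indifferent against (0.5, 0.5)
```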
If a player A has a dominant strategy ${\displaystyle s_{A}}$ then there exists a Nash equilibrium in which A plays ${\displaystyle s_{A}}$. In the case of two players A and B, there exists a Nash equilibrium in which A plays ${\displaystyle s_{A}}$ and B plays a best response to ${\displaystyle s_{A}}$. If ${\displaystyle s_{A}}$ is a strictly dominant strategy, A plays ${\displaystyle s_{A}}$ in all Nash equilibria. If both A and B have strictly dominant strategies, there exists a unique Nash equilibrium in which each plays their strictly dominant strategy.
In games with mixed-strategy Nash equilibria, the probability of a player choosing any particular pure strategy can be computed by assigning a variable to each strategy that represents a fixed probability of choosing that strategy. In order for a player to be willing to randomize, their expected payoff for each pure strategy should be the same. In addition, the sum of the probabilities for each strategy of a particular player should be 1. This creates a system of equations from which the probabilities of choosing each strategy can be derived.^[16]
Matching pennies
Strategy         | Player B plays H | Player B plays T
Player A plays H | −1, +1           | +1, −1
Player A plays T | +1, −1           | −1, +1
In the matching pennies game, player A loses a point to B if A and B play the same strategy and wins a point from B if they play different strategies. To compute the mixed-strategy Nash equilibrium,
assign A the probability ${\displaystyle p}$ of playing H and ${\displaystyle (1-p)}$ of playing T, and assign B the probability ${\displaystyle q}$ of playing H and ${\displaystyle (1-q)}$ of
playing T.
{\displaystyle {\begin{aligned}&\mathbb {E} [{\text{payoff for A playing H}}]=(-1)q+(+1)(1-q)=1-2q\\&\mathbb {E} [{\text{payoff for A playing T}}]=(+1)q+(-1)(1-q)=2q-1\\&\mathbb {E} [{\text{payoff for A playing H}}]=\mathbb {E} [{\text{payoff for A playing T}}]\implies 1-2q=2q-1\implies q={\frac {1}{2}}\\&\mathbb {E} [{\text{payoff for B playing H}}]=(+1)p+(-1)(1-p)=2p-1\\&\mathbb {E} [{\text{payoff for B playing T}}]=(-1)p+(+1)(1-p)=1-2p\\&\mathbb {E} [{\text{payoff for B playing H}}]=\mathbb {E} [{\text{payoff for B playing T}}]\implies 2p-1=1-2p\implies p={\frac {1}{2}}\end{aligned}}}
Thus, a mixed-strategy Nash equilibrium in this game is for each player to randomly choose H or T with ${\displaystyle p={\frac {1}{2}}}$ and ${\displaystyle q={\frac {1}{2}}}$.
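The same indifference conditions can also be solved mechanically. This is a small illustrative sketch only; the use of sympy is my own choice, not something the text prescribes.

```python
from sympy import symbols, Eq, solve

p, q = symbols('p q')

# A is indifferent between H and T when B plays H with probability q:
#   E[A plays H] = (-1)*q + (+1)*(1-q),  E[A plays T] = (+1)*q + (-1)*(1-q)
eq_A = Eq((-1) * q + (1) * (1 - q), (1) * q + (-1) * (1 - q))

# B is indifferent between H and T when A plays H with probability p:
eq_B = Eq((1) * p + (-1) * (1 - p), (-1) * p + (1) * (1 - p))

print(solve([eq_A, eq_B], [p, q]))   # {p: 1/2, q: 1/2}
```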
Free Money Game
Strategy           | Player B votes Yes | Player B votes No
Player A votes Yes | 1, 1               | 0, 0
Player A votes No  | 0, 0               | 0, 0
In 1971, Robert Wilson came up with the Oddness Theorem, ^[26] which says that "almost all" finite games have a finite and odd number of Nash equilibria. In 1993, Harsanyi published an alternative
proof of the result.^[27] "Almost all" here means that any game with an infinite or even number of equilibria is very special in the sense that if its payoffs were even slightly randomly perturbed,
with probability one it would have an odd number of equilibria instead.
The prisoner's dilemma, for example, has one equilibrium, while the battle of the sexes has three—two pure and one mixed, and this remains true even if the payoffs change slightly. The free money
game is an example of a "special" game with an even number of equilibria. In it, two players have to both vote "yes" rather than "no" to get a reward and the votes are simultaneous. There are two
pure-strategy Nash equilibria, (yes, yes) and (no, no), and no mixed strategy equilibria, because the strategy "yes" weakly dominates "no". "Yes" is as good as "no" regardless of the other player's
action, but if there is any chance the other player chooses "yes" then "yes" is the best reply. Under a small random perturbation of the payoffs, however, the probability that any two payoffs would
remain tied, whether at 0 or some other number, is vanishingly small, and the game would have either one or three equilibria instead.
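To see the even number of equilibria concretely, the short sketch below (an illustration, not part of the quoted text) enumerates the pure strategy profiles of the free money game and reports which of them are Nash equilibria.

```python
import itertools

# Free money game payoffs: (row player, column player), strategies 0 = Yes, 1 = No.
payoff = {
    (0, 0): (1, 1), (0, 1): (0, 0),
    (1, 0): (0, 0), (1, 1): (0, 0),
}

def is_pure_nash(r, c):
    u_r, u_c = payoff[(r, c)]
    # No profitable unilateral deviation for either player.
    row_ok = all(payoff[(r2, c)][0] <= u_r for r2 in (0, 1))
    col_ok = all(payoff[(r, c2)][1] <= u_c for c2 in (0, 1))
    return row_ok and col_ok

print([rc for rc in itertools.product((0, 1), repeat=2) if is_pure_nash(*rc)])
# [(0, 0), (1, 1)] -> (Yes, Yes) and (No, No): two pure equilibria, an even number
```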
1. This term is dispreferred, as it can also mean the opposite of a "strong" Nash equilibrium (i.e. a Nash equilibrium that is vulnerable to manipulation by groups).
Now let's just put this out there - I'm not really a blogger! In fact any time I have posted on my blog it's made me cringe; I much prefer reading other people's blogs to even considering writing my own. Recently though I have adapted something new into my teaching with my year 10 group that I thought would be nice to share now that it's been tried.
It's always a risk sharing your teaching ideas; just as students have different ways of learning, we all seem to have different ways of teaching the same content, and quite often I see people sharing ideas just to be hammered down by different styles of teachers. There seems to be a spectrum whereby on one side of the profession stand the (what I would call) 'pure mathematicians' and on the other side there stand the teachers who just enjoy 'teaching stuff'. I like to think I am standing somewhere in the middle: I like 'geeking out' and talking about mathematical concepts whenever I can, but with the right group I also enjoy the occasional formula triangle (yeah I said it!) or even a mnemonic or rhyme to help memorise a rule.
One thing I have always been a fan of is exploring 'how not to do something', or more importantly digging as far as I can into students' misconceptions and why they think the way they do and why it is they seem to make the same mistakes - even after appearing to understand it entirely only a week prior. This has been a little project of mine over the year: I have been compiling common misconceptions throughout 4 sets of mocks/practice tests and general assessments with my year 10 higher achieving group since day 1 this year, and it has led to this little beauty (if I do say so myself).
'Our best mistakes' - recently someone walked past my classroom and wondered why on earth I had called it that. I responded to it with a joke in the moment but I am a big believer that
we learn the most from our mistakes, picking them apart and making a big deal about them in a positive and constructive way. So I decided to trial this idea out with my group. Behind
each tab we are faced with a particular question from an assessment we have covered earlier in the year (see below).
The next minute or so I spend questioning the mistakes and misconceptions that could arise from this topic. My questions are targeted at all of the students who I logged as making that mistake across their papers - unbeknown to them! And I get them to pick apart 'how not to do it'. Once we are all happy and crystal clear how we should answer it... it's onto the fun part.
We now have a snapshot of someone in the group who made the mistake that they themselves have just identified. Behind the question mark is the secret identity of the student! Usually said student hangs their head and admits defeat after recognising their handwriting, but it has caused a great sense of 'buzz' in the room, wanting to know who it was and the anticipation that it might be them next! (I should mention here that I did ask for permission from my class to know who would not want to be included; as I could only fit half my class of 32 in, it was helping me out!) After a bit of talking and discussing the misconception further, generally asking said student what they were thinking, we move on as a class and attempt a similar question (see below).
Once we are finished we wrap it up with the solution and have it highlighted in our books. This is generally done in the last 10 minutes of a lesson or slotted in as a spaced break
when moving between topics or levels within a topic.
The whole thing has been a great success and when I pull students up on what the mistake was they made on the 'mistake thingy' as they call it, they still seem able to recall and
correct it (thankfully!).
Obviously I couldn't share the file but this was just an idea.... Let's celebrate mistakes!
Just made this revision poster to give to our year 11's along with their reports! Big thanks to @Corbettmaths and @Hegartymaths for making such fantastic videos that our year 11's are
finding so useful!
These QR codes are going to form our 'Countdown to the exam' process; there are just enough that students have 5 topics per week to scan and revise in the evening! Hope someone finds it useful! The new 'Revision Page' is now open with both posters added, along with some learning grids with more to come!
MASSIVE thanks to @EJmaths for helping out with the QR pictures! Those things take a good few hours to download one by one! You have been a tremendous help recently!
Watch This Space!
Just made this revision poster to give to my year 11's at parents evening tomorrow! Big thanks to @Corbettmaths and @Hegartymaths for making such fantastic videos!
These QR codes are going to form our 'Countdown to the exam' process; there are just enough that students have 5 topics per week to scan and revise in the evening! Hope someone finds it useful! I'll be opening the new 'Revision Page' on Saturday when I have some time to upload a few more resources onto the page. This will be available to download there, but if you can't wait here it is:
Download QR Poster
So after the last post on feedback I received a few messages asking about how I was using the feedback sheets in lessons and how I was using them for KS4, so here it is. Part of my focus this year has been developing this process, and there are still some small aspects that I believe need ironing out, but it is currently manageable and effective, which has been my aim from the start.
The main thing that I am doing now to gauge what feedback I give is using topic assessments every 2 weeks. The examples shown below are after a 2 week stretch on simultaneous equations
with a high achieving KS4 set, at the time of the assessment we had covered algebraic linear and quadratic simultaneous equations and done a small amount on graphical linear
simultaneous equations. The only feedback I give now is on these assessments, after all if a student gets 20 questions wrong in their books on a particular subtopic and this has helped
lead to mastery when completing the assessments, why bother marking all those mistakes when everything is self assessed in class time and they now understand it as proven by the
assessment? I am strongly of the opinion that these periods of self assessment are incredibly powerful and I insist that all work must be marked and corrected in class so that I can
conduct my AFL during classtime.
After the class has done the assessment I get these marked and fed back on for the next upcoming lesson. This particular set took me 1 hour and 33 minutes (I was asked to time it by a member of the department, so no... I am not that sad); it took me slightly longer than usual as some of the methods and feedback elements were slightly more complex than basic fractions with year 7. Again each member of the class received written feedback on their own relevant sheet: some had algebraic linear equations, some quadratic focussing on either the 'y=' method or the substitution method when using the equation of a circle, and others had graphical simultaneous equations. So feedback responses were all being targeted at individual areas for improvement.
Once completed, the next lesson students have around 30 minutes to read through the feedback and complete the relevant response areas; students have questions to work through focusing on fluency, reasoning and problem solving for each topic. Along with this, each feedback sheet has its own relevant 'help sheet' that you can see in the pictures below. One thing I found was that giving 32 students a bunch of questions on a topic they couldn't do on the assessment was a recipe for a riot of 'I don't get it', as it was impossible to provide extended feedback for 6 different sub topics beyond the written element in order for them to make a confident start on the questions. If needed, students can collect the relevant help sheet and start working straight away. I have found this particularly useful for some of my classes where I have focussed on building resilience; students are now directing themselves towards 'revision material' or 'help sheets' rather than relying on me. Some example pictures are shown below:
So far this system has been working well, I have run it across all of my year groups a few times now and the quality of the responses has improved tremendously. Students have been more
engaged within the lesson and are becoming more and more motivated to improve upon areas they need to develop.
One student read their feedback the lesson after the DIRT session last week and said "Sir, this has made my week, thank you!". At least I know it's being appreciated by someone.
It's not really something I am fond of, putting my marking out into the public domain, so I'll start off straight away with the reason I feel uncomfortable doing so. I know that my marking isn't yet perfect, although I do believe I am doing my best at this point in time. I have been constantly re-assessing my feedback this year and refining the way I approach it; earlier in August I had just over 100 students fill in a questionnaire asking them what they thought of the feedback they received in mathematics, and asked them to write in some qualitative feedback. This was actually really useful, and I would definitely recommend trying it out. Ultimately, this has helped lead to where I currently am with my own feedback this new term. As teachers, marking is quite personal, especially if you have spent hours doing it! So having others pass judgement on it is always going to be a tough conversation (if the conversation isn't a positive one anyway).
This is pretty much how I start the day, 6:45 in my classroom. I cannot bring myself to take any marking home; I find that it takes me 10 minutes a book, as opposed to 2 minutes when I am in 'work mode', which doesn't tend to switch off in my classroom. So I decided that I would get to work with enough time to put in an hour's marking every day if needed; this has meant avoiding those long hours of book marking sitting on the sofa switching between watching Luther and making another cup of coffee.
So first of all, all of my marking tends to take place in class, ALL classwork is marked by students in green pen, there is rarely a task that we ever do in class that we don't get
instant feedback on and self assessed in class. I find that it very quickly allows me to undertake my AFL around the room when you can see ticks or crosses very clearly in the students
books. So the only part that I actually conduct a detailed mark on is end of topic assessments and teacher assessed homeworks, then finally on student responses to feedback. Fortunately
these tend to fall every 12-14 days, which falls in line with the school's marking policy of two weeks. Below is a typical topic assessment; this is with a 'bottom set' year 7 class I
teach 3 times a week.
While marking the assessments I have (for this particular assessment) 5 different feedback sheets in front of me: simplifying fractions, finding fractions of an amount, multiplying/dividing fractions, adding/subtracting fractions, and converting improper fractions and mixed
numbers. As I am marking the assessment, I then pick up the sheet which each student needs to improve on and provide their feedback on this particular sheet. At the start of the next
lesson students then stick these sheets in on a double page spread and work on their responses on the opposite page. What I had found previously was that writing in a question was
taking me an extra few minutes per book, and then once in class it was extremely difficult to ensure that all students had completed that question as one student would finish in a few
minutes whereas others would need much longer. This is why I have put these new sheets together; it gives students time to consolidate, challenge and extend the topic they struggled
on and allows them to work through reasoning and problem solving style questions in line with the new GCSE.
Some examples of books from this year 7 class are shown below:
So far, after around 9-10 DIRT sessions this term across all my classes, students have been really engaged in these feedback sessions, there is enough work to be getting on with that
it allows me to get around the class and provide verbal feedback to those who struggled more than others on particular topics, while those who are happy to work through the whole sheet
can move onto all levels of questions and get some challenge on the problem solving questions. I have been dedicating around 30 minutes of a lesson to these DIRT sessions so far and
have found this is just enough time that everyone will be able to complete the 'fluency' questions while others are completing all levels of questions.
The only written feedback I am now giving is on the 9-1 Feedback Sheets I have been uploading. It's taking me on average 1 hour to mark a set of 30 mini assessments and provide feedback and actions for each student. And this is happening after every topic
or series of subtopics, for example my year 9's have been doing algebraic fractions and we have recently done an assessment on simplifying them. Students then had to work on either
factorising quadratics, simplifying basic algebraic fractions or simplifying algebraic fractions involving quadratics. See example image below:
Anyway that's what I'm doing! If you have been using these in a different way or have any ideas then please let me know! You can access all of these sheets on the link below:
Since this question appeared on the June 2015 non-calculator exam that my year 11's sat for their mocks, I have been searching for similar questions and resources. For some reason, and partly down to the fact that I can't figure out what the topic is called... I can't find anything! So I've called it 'Functional Volume' and put together some questions that I'll be using
in January.
Below there is a worksheet that I made with 6 questions, another worksheet with some past exam questions, a powerpoint and SMART notebook file that I have imported the images into to
be used on the interactive board alongside the worksheets.
I have just added 12 worksheets for the 13-15 Times Tables. They are exactly the same as the previous worksheets and follow on step by step in the series:
2 - 13 Times Tables
2 - 14 Times Tables
2 - 15 Times Tables
2 - 13-14 Times Tables
2 - 14-15 Times Tables
2 - 13-15 Times Tables
When printing select 'Multiple' as indicated in the picture below. Because there are two worksheets loaded into each document this allows you to print 2 sheets per A4 page! They come
out nicely in the smaller size and help with printing costs if you plan on using these every week.
Right, so I couldn't resist just making these!
Star Wars Angry Birds stickers! I'm getting so excited about the new Star Wars film that everything is currently Star Wars themed. I even went and bought my uncle a pair of Chewbacca slippers this weekend for Christmas (which I am now wearing!) Woops!
Never looked forward to marking my year 7 & 8 books so much! Last week some of my year 11's complained that they didn't get stickers though, so I may have to print off a few more.
Sticker Sheets Needed:
A4 Sheet Labels template
70 x 25mm round labels per sheet
Link Here
Latest game! With the upcoming Star Wars movie I got inspired to create a new times table game to use with my year 7 and 8 class! Introducing 'Math Invaders'... no medals required for
the creative game name there.
On normal mode the player has 11 game modes from the 2-12 times tables; on each game mode the player gets 10 questions from each times table to answer in a 'space invaders' style game.
As you can see below, the player has 3 lives and is given 3 difficulty levels which determine how fast the ships move.
In terms of the Star Wars theme these ships are meant to have a resemblance to the X-wing and TIE fighters! Although for obvious reasons they had to be a little different!
The player moves left and right using the arrow keys and presses the space bar to shoot! At the end you get a screen to show which questions were answered correctly or if a mistake was
made this shows up on the screen also! (see below)
On arcade mode the player has to work from 2 all the way up to 12 without any breaks! See if you can do it on hard! They move quite fast!
Play Math Invaders Here!
Finally managed to get this game working after a mental 6 weeks back this year! Launching a new marking system and having some pretty heated debates around our marking policy has taken
my 'game making time' down to bare minimum!
This one I am really happy with though, I used this last week with a lower achieving year 7 class, by the end of the lesson I had most of them completing the 'red level' questions on
arcade mode and saying that it was easy! So I am feeling pretty happy with this game!
Looking back I should have made this one before the quadratic factorisation game but hey ho! The main concept is plugging in your double brackets, drag and drop the algebra discs into
the grid, then counting them up and completing the final answer! Pictures below!
Play here now!
The player is then able to click 'Check' and they will receive instant feedback on whether the answer is correct or not, while all the while receiving instant feedback throughout the
game as the squares will highlight green if they are correct or red if they are incorrect. After completing the question the player can choose to click 'Method' and this will display
the answer in an animation using the grid method. (Shown below).
As well as being able to enter ANY question into the workshop mode they are also able to play 'Arcade' mode. Note, although it doesn't follow the general style of an arcade mode function with timings/levels etc., it has 4 levels of questions which are colour coded; I used these to set certain colours to members of the class, being able to track where each student was and seeing which questions they were trying out. A sample of the questions is shown below!
I hope you like it and if you try it out then please let me know how it goes or if you have any suggestions for improvements! I am really pleased with this one so far, I am now going
to go back and slightly re-design the quadratic factorisation game to include the Method function; this part took the longest though, so it may not be till Christmas now!
CSCI 3104 Problem Set 5
1. (15 pts) Bellatrix Lestrange is writing a secret message to Voldemort and wants to
prevent it from being understood by meddlesome young wizards and Muggles. She
decides to use Huffman encoding to encode the message. Magically, the symbol frequencies of the message are given by the Pell numbers, a famous sequence of integers
known since antiquity and related to the Fibonacci numbers. The nth Pell number is
defined as Pn = 2 Pn−1 + Pn−2 for n > 1 with base cases P0 = 0 and P1 = 1.
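For reference, the Pell frequencies can be generated in a few lines; this is only an illustrative sketch, not part of the assignment.

```python
def pell(n):
    """Return [P_0, P_1, ..., P_n] with P_0 = 0, P_1 = 1, P_n = 2*P_{n-1} + P_{n-2}."""
    p = [0, 1]
    for _ in range(2, n + 1):
        p.append(2 * p[-1] + p[-2])
    return p[:n + 1]

print(pell(8))   # [0, 1, 2, 5, 12, 29, 70, 169, 408]
```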
(a) For an alphabet of Σ = {a, b, c, d, e, f, g, h} with frequencies given by the first
|Σ| non-zero Pell numbers, give an optimal Huffman code and the corresponding
encoding tree for Bellatrix to use.
(b) Generalize your answer to (1a) and give the structure of an optimal code when
the frequencies are the first n non-zero Pell numbers.
2. (30 pts) A good hash function h(x) behaves in practice very close to the uniform hashing
assumption analyzed in class, but is a deterministic function. That is, h(x) = k each
time x is used as an argument to h(). Designing good hash functions is hard, and a
bad hash function can cause a hash table to quickly exit the sparse loading regime by
overloading some buckets and underloading others. Good hash functions often rely
on beautiful and complicated insights from number theory, and have deep connections
to pseudorandom number generators and cryptographic functions. In practice, most
hash functions are moderate to poor approximations of uniform hashing.
Consider the following hash function. Let U be the universe of strings composed of the characters from the alphabet Σ = [A, . . . , Z], and let the function f(xi) return the index of a letter xi ∈ Σ, e.g., f(A) = 1 and f(Z) = 26. Finally, for an m-character string x ∈ Σ^m, define h(x) = (Σ_{i=1}^{m} f(xi)) mod ℓ, where ℓ is the number of buckets in the hash table. That is, our hash function sums up the index values of the characters of a string x and maps that value onto one of the ℓ buckets.
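A minimal sketch of h(x), and of counting bucket loads without building a real hash table, might look like the following; the short name list here is only a stand-in, since the assignment's census name file is not reproduced in this text.

```python
from collections import Counter

def h(x, buckets=200):
    """Sum of the 1-based alphabet indices of the characters of x, modulo the bucket count."""
    return sum(ord(c) - ord('A') + 1 for c in x.upper()) % buckets

# Stand-in names; the assignment uses a uniformly random 50% sample of the census surnames.
names = ["SMITH", "JOHNSON", "WILLIAMS", "BROWN", "JONES"]
loads = Counter(h(name) for name in names)
print(loads)                # bucket index -> number of names hashed there
print(max(loads.values()))  # length of the longest chain under chaining
```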
(a) The following list contains US Census derived last names:
Using these names as input strings, first choose a uniformly random 50% of these
name strings and then hash them using h(x).
Produce a histogram showing the corresponding distribution of hash locations
when ℓ = 200. Label the axes of your figure. Briefly describe what the figure
shows about h(x), and justify your results in terms of the behavior of h(x). Do
not forget to append your code.
Hint: the raw file includes information other than name strings, which will need to be removed;
and, think about how you can count hash locations without building or using a real hash table.
(b) Enumerate at least 4 reasons why h(x) is a bad hash function relative to the ideal
behavior of uniform hashing.
(c) Produce a plot showing (i) the length of the longest chain (were we to use chaining
for resolving collisions under h(x)) as a function of the number n of these strings
that we hash into a table with ℓ = 200 buckets, (ii) the exact upper bound on the
depth of a red-black tree with n items stored, and (iii) the length of the longest
chain were we to use a uniform hash instead of h(x). Include a guide of c n
Then, comment (i) on how much shorter the longest chain would be under a
uniform hash than under h(x), and (ii) on the value of n at which the red-black
tree becomes a more efficient data structure than h(x) and separately a uniform hash.
3. (15 pts) Draco Malfoy is struggling with the problem of making change for n cents
using the smallest number of coins. Malfoy has coin values of v1 < v2 < · · · < vr for r coin types, where each coin's value vi is a positive integer. His goal is to obtain a set of counts {di}, one for each coin type, such that Σ_{i=1}^{r} di = k and where k is minimized.
(a) A greedy algorithm for making change is the cashier's algorithm, which all young wizards learn. Malfoy writes the following pseudocode on the whiteboard to illustrate it, where n is the amount of money to make change for and v is a vector of the coin denominations:

    wizardChange(n, v, r):
        d[1 .. r] = 0    // initial histogram of coin types in solution
        while n > 0 {
            k = 1
            while ( k < r and v[k] > n ) { k++ }
            if k == r { return 'no solution' }
            else { n = n - v[k] }
        }
        return d
Hermione snorts and says Malfoy’s code has bugs. Identify the bugs and explain
why each would cause the algorithm to fail.
(b) Sometimes the goblins at Gringotts Wizarding Bank run out of coins,1 and make
change using whatever is left on hand. Identify a set of U.S. coin denominations
for which the greedy algorithm does not yield an optimal solution. Justify your answer in terms of optimal substructure and the greedy-choice property. (The set should include a penny so that there is a solution for every value of n.)
Footnote 1: It's a little known secret, but goblins like to eat the coins. It isn't pretty for the coins, in the end.
(c) (8 pts extra credit) On the advice of computer scientists, Gringotts has announced
that they will be changing all wizard coin denominations into a new set of coins
denominated in powers of c, i.e., denominations of c
, c1
, . . . , c`
for some integers
c > 1 and ` ≥ 1. (This will be done by a spell that will magically transmute old
coins into new coins, before your very eyes.) Prove that the cashier’s algorithm
will always yield an optimal solution in this case.
Hint: first consider the special case of c = 2.
A new numerical algorithm for the analytic continuation of Green's functions
The need to calculate the spectral properties of a Hermitian operator H frequently arises in the technical sciences. A common approach to its solution involves the construction of the Green's
function operator G(z) = [z - H]^-1 in the complex z plane. For example, the energy spectrum and other physical properties of condensed matter systems can often be elegantly and naturally expressed
in terms of the Kohn-Sham Green's functions. However, the nonanalyticity of resolvents on the real axis makes them difficult to compute and manipulate. The Herglotz property of a Green's function
allows one to calculate it along an arc with a small but finite imaginary part, i.e., G(x + iy), and then to continue it to the real axis to determine quantities of physical interest. In the past,
finite-difference techniques have been used for this continuation. We present here a fundamentally new algorithm based on the fast Fourier transform which is both simpler and more effective.
All Science Journal Classification (ASJC) codes
• Numerical Analysis
• Modeling and Simulation
• Physics and Astronomy (miscellaneous)
• General Physics and Astronomy
• Computer Science Applications
• Computational Mathematics
• Applied Mathematics
AIC with ALSs
Naked Pair and Almost Locked Set
A Locked Set is a group of cells (that can all see each other) of size N where the number of candidates in those cells is equal to the size of the group. That is N cells contain N candidates. A
solved cell or a clue is a Locked Set where N=1, but such a cell is not useful. The smallest useful Locked Set is a Naked Pair (where N=2) as in the [2,8] set in the diagram. The next smallest Locked
Set is a Naked Triple (N=3) and so on.
An Almost Locked Set (ALS) is N cells containing N+1 candidates. In the context of Alternating Inference Chains in this solver, an ALS is of size N=2 and the number of different candidates in those
cells is 3, although bigger ALS groups are possible. So an ALS of size 2 will be two cells that would form a Naked Pair plus one other candidate. In the diagram above the [2,8] pair are joined by a stray 6 which
stops it being a useful Naked Pair.
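These definitions translate directly into code. The following small sketch (an illustration, not from the article) classifies a group of cells, given each cell's candidate set, as a Locked Set, an Almost Locked Set, or neither:

```python
def classify(cells):
    """cells: list of candidate sets, one per cell (all assumed to see each other)."""
    n = len(cells)
    candidates = set().union(*cells)
    if len(candidates) == n:
        return "Locked Set"          # N cells, N candidates (a Naked Pair when N = 2)
    if len(candidates) == n + 1:
        return "Almost Locked Set"   # N cells, N + 1 candidates
    return "neither"

print(classify([{2, 8}, {2, 8}]))     # Locked Set (a Naked Pair)
print(classify([{2, 8, 6}, {2, 8}]))  # Almost Locked Set ([2,8] plus a stray 6)
```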
ALS in an AIC fragment
Let's continue with this ALS.
While solving a puzzle I am hunting around for Inference Chains and perhaps I find my chain turns ON the 6 in cell C4. That will remove all other 6s in the box including the 6 in our ALS. If that 6
is OFF then we create an on-the-fly Naked Pair.
Now, a Naked Pair eliminates candidates in the row or column (or box) it is aligned on so we can use this elimination property as part of our chain. This is the trick! By removing the 6 in B5 we fix
2 and 8 into those two cells so we can look along the row at other 2s and 8s and turn them OFF. This I do in cell B9. From there I can continue the inference chain. You get two cracks of the whip:
check both branches - the 2s and the 8s in the pseudo Naked Pair.
AIC with ALS : Load Example or : From the Start
A real life example now. This chain contains an ALS on the cells
(I used squiggly brackets to denote ALS as opposed to square brackets for
Grouped Cells
). 9s in row H are the entry point. We turn 9
which turns
the 9 in
- the extra candidate that makes the ALS an ALS. This gives us a Naked Pair of [5,7] that points up column 6 turning
the 7 in
and the chain continues.
Ultimately we use Nice Loop Rule 2 to place 4 in A4.
AIC on 4 (Discontinuous Alternating Nice Loop, length 12):
- Contradiction: When 4 is removed from A4 the chain implies it must be 4 - other candidates 2/5 can be removed
Off Chain eliminations : Load Example or : From the Start
This second example uses a chain to kill off-chain candidates, which is Nice Loop Rule 1. The ALS is in
and consists of [1/3/8] and [1/3] respectively. We turn off the extra candidate, 8 in
to enable the Naked Pair to be formed.
Alternating Inference Chain
AIC Rule 1: -3[B5]+6[B5]-6[B8]+6[D8]-8[D8]+8[D1]-8[F1]+3{F1|F4}-3[F3]+3[B3]-3[B5]
- Off-chain 6 taken off B9 - weak link: B5 to B8
- Off-chain candidates 1 taken off cell D8, link is between 6 and 8 in D8
- Off-chain 8 taken off F2 - weak link: D1 to F1
- Off-chain 8 taken off F3 - weak link: D1 to F1
- Off-chain 8 taken off J1 - weak link: D1 to F1
- Off-chain 3 taken off B4 - weak link: B3 to B5
... by: Himself
Sunday 13-Aug-2023
In the last example, I believe the chain should also eliminate the 1s in F8 and F9, since 1 will always be contained within the naked pair.
Andrew Stuart writes:
This is correct, logically. The solver however is doing a lot of work here but is not sophisticated enough to nab those eliminations. Might be too much to explain in one step!
... by: Ymiros
Friday 7-Jul-2023
Wouldn't this strategy also work with almost locked hidden sets?
Specifically for N=2 this would be a set of 2 digits that within a specific unit have only 2 possible positions, but one of them is allowed to have a 3rd possible position within that unit so if that
3rd possibility gets eliminated we can get rid of all other candidates from the hidden pair and possibly continue with a strong link from there.
... by: tebo
Thursday 11-May-2023
I'm curious why this Strategy is not in your solver's Extreme Strategies list. I have not worked through any examples of this with any solver, nor have I examined Robert's "almost fish", "forcing
net", or "AALS" logic. However, from your examples, it seems like the objective is to remove the N+1 Digit, leaving a Naked Subset. The regular AIC Strategy would still Eliminate the N+1 Candidate,
leaving the Naked Subset to be found by a following step in the Solution Path. I completely agree with removal of AIC with ALSs if it's simply an intellectual exercise.
Andrew Stuart writes:
It is in the solver! It is just not a partitioned strategy. It is implemented for all strategies that use alternating inference chains. It is a type of chain link and there are other ones: grouped
cells and URs for example. I don't think there is a simpler two-part process to using a chain with these elements, but would be interested in an example
... by: Robert
Monday 6-Dec-2021
I have worked a bit more on this.
An AIC with "almost locked sets" and also "almost fish" can solve the "unsolvables" #92 and #115 (and possibly others - I don't have all the unsolvables in my database).
A fish is basically the same thing as a locked set (which is itself the same thing as a hidden locked set).
If you think of a "fish" as consisting of "base sets" and "cover sets", where each set contains nine candidates, one of which must be true, then there are four kinds of such sets of candidates:
row-val: the nine candidates occurring in a particular row and with a particular value, but occurring in different columns.
col-val: the nine candidates occurring in a particular column and with a particular value, but occurring in different rows.
blk-val: the nine candidates occurring in a particular block and with a particular value, but occurring in different cells within that block.
row-col: the nine candidates occurring in a particular row and a particular column (so a cell), but having different values.
So a "fish" is then a group of base sets, and a group of cover sets, the same number of each type. If all the candidates in the base sets that have not already been eliminated, also occur in the
cover sets, then any candidates remaining in the cover sets but outside the base sets, can be eliminated.
If base sets are row-cols with the same row and different columns (so cells), and the cover sets are row-vals with the same row and different values, the "fish" is a locked set (within a row). Same
idea for columns and blocks.
If base sets are row-vals with the same row and different values, and the cover sets are row-cols with the same row and different columns, the "fish" is a hidden locked set (within a row). Same idea
for columns and blocks.
If base sets are row-vals with the same value and different rows, and the cover sets are col-vals with the same value and different columns, then this is a traditional fish. Same idea with rows and
columns switched.
So since there is really no conceptual difference between a locked set, a hidden locked set, and a fish, the concept of "almost locked sets" in an AIC extends to the other two as well. Hidden locked
sets are not so important, because if there is an "almost hidden locked set", there is also a conjugate "almost locked set". But the "almost fish" are not redundant with "almost locked sets" in an AIC.
... by: Robert
Tuesday 20-Jul-2021
And following up - I have implemented an "Almost Fish" feature in my AIC/forcing net software. That is, there is no fish, but as you start to follow the "off" implications of an AIC, a fish appears,
and can be used to turn some other candidates "off".
In my database of 235 "difficult" puzzles (of which 166 can already be solved), this does not get me any additional solved puzzles. However, it does get a small number of additional eliminations -
just not enough to solve fully the puzzles. So I think the idea of an "almost fish" being used as a link in an AIC is valid, although in practice it may not be found very often. (It's actually found
quite a lot in my sample, but most of the eliminations I get would eventually have been found anyway by other methods.)
I am now tending to use the terminology "dynamic locked set" or "dynamic fish" instead of "almost locked set" or "almost fish", to allow for their more general use in a forcing net. It can be that
there are multiple extra candidates preventing the existence of a locked set or a fish, but following the implications of the forcing net, those multiple extra candidates bring the locked set or fish
into existence (conditional on the initial assumption behind the forcing net).
... by: Robert
Tuesday 20-Jul-2021
I think the idea can be extended a bit.
The most obvious one would be to allow ALSs with more than two cells and more than three values. The description above is quite general, but from the comments, "In the context of Alternating
Inference Chains in this solver, an ALS is of size N=2 and the number of different candidates in those cells is 3, although bigger ALS groups are possible." I take it the implementation in the solver
is limited to two-cell ALSs. The current "unsolvable", #461, can be solved if larger ALSs are allowed.
There is another possible generalisation though. Suppose you have an AALS, two cells with four values. You begin by assuming some candidate somewhere is "on", and follow the implications through weak
and strong links. If *two* of the values in the AALS are eliminated, then (conditional on the initial assumption) it becomes an ALS, and we can draw further inference by eliminating the two values in
other cells in the same unit. This is even more likely to occur if we move to a "forcing net" way of doing things instead of a linear chain (related to the "AICs with Exotic Links" topic).
Unsolvables #411 and #412 can be solved in this way.
I have a database of 235 advanced puzzles (which cannot be solved using naked/hidden singles/pairs/triples/quads, box-line reduction and pointing pairs/triples), many of which have come from this
site, including the unsolvables, but some from other sources. I can now solve (with my own solver) 166 of them. 15 of those require a "forcing net" technique, sometimes including the generalisation
of the ALS technique as described above. However, my "forcing net with dynamic LS" algorithm is so slow it is painful - I need to improve it.
... by: Nono
Tuesday 4-Jun-2013
Question like Str8tsFan.
the last solver version 1.95 does not eliminate the 8 in J1.
No problem for good sudoku players !
Andrew Stuart writes:
Quite correct. There was some very odd code in there that stopped J1 from being eliminated because the link had already eliminated in the box. Can't say why, must have had a brain fart that day.
Diagram and solver updated.
... by: Str8tsFan
Wednesday 3-Oct-2012
I hope Andrew will ever read (and answer) these comments... About the second example:
(1) Using the link to the solver leads to a sudoku with a tiny little difference: B9 has an additional candidate 6, which is missing at the example. At the solver this candidate will be eliminated
with the very same example:
"- Off-chain candidate 6 taken off B9 - weak link: B5 to B8"
(2) What about the candidate 8 at J1? As far as I can understand the theory, the weak link "D1 to F1" should not only eliminate the 8s at F2 and F3 (weak link in same box) but also the 8 at J1 (weak
link in same column). More interesting: the solver doesn't eliminate that 8 at J1 either. Why? Did I make any mistake, or is it a flaw of the solver? As far as I understood, a weak link can be part
of two entities (box and row or box and column) and thus should be able to eliminate candidates at both entities, maybe the solver fails to check the second entity?
Andrew Stuart writes:
Thanks for the detailed hints. Both points correct. I've updated the diagram and solver.
... by: Mr Turner
Tuesday 10-Jul-2012
The last paragraph seems flawed since there is an 8 at F8. The 8s at F2 and F3 could be colored off as an inference from D1 being on. This implies F8 is on. F8 is also inferred to be on from D8 being on.
Andrew Stuart writes:
Correct. I've removed that paragraph now. Doh.
... by: Mr Archibald
Wednesday 22-Apr-2009
I call this the 6 pack.
The first three in a line on rows 1,2 is the same as the last six on row 3. This can work upside down and back to front and sideways.
a b c d e f g h i
i h g f e d c b a
d e f a b c i h g
SGDinference is an R package that provides estimation and inference methods for large-scale mean and quantile regression models via stochastic (sub-)gradient descent (S-subGD) algorithms. The
inference procedure handles cross-sectional data sequentially:
1. updating the parameter estimate with each incoming “new observation”,
2. aggregating it as a Polyak-Ruppert average, and
3. computing an asymptotically pivotal statistic for inference through random scaling.
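The three steps above are not specific to R. As a language-agnostic illustration only (this is not the package's implementation, and the step-size schedule below is an assumption), a single Polyak-Ruppert averaged SGD pass for the mean-regression case might look like this:

```python
import numpy as np

def sgd_polyak_ruppert(X, y, gamma0=0.5, alpha=0.667):
    """One pass of SGD for least squares with Polyak-Ruppert averaging.

    Each observation is used once (step 1), and the running average of the
    iterates is the reported estimate (step 2). Random-scaling inference
    (step 3) would additionally track partial sums along the averaged path.
    """
    n, d = X.shape
    beta = np.zeros(d)        # current SGD iterate
    beta_bar = np.zeros(d)    # Polyak-Ruppert average of the iterates
    for i in range(n):
        lr = gamma0 * (i + 1) ** (-alpha)        # assumed step-size schedule
        grad = (X[i] @ beta - y[i]) * X[i]       # stochastic gradient of the squared loss
        beta = beta - lr * grad
        beta_bar += (beta - beta_bar) / (i + 1)  # running average of the iterates
    return beta_bar

# Toy usage with simulated data (illustrative only).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(10000), rng.normal(size=(10000, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=10000)
print(sgd_polyak_ruppert(X, y))   # roughly [1.0, 2.0, -0.5]
```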
The methodology used in the SGDinference package is described in detail in the following papers:
• Lee, S., Liao, Y., Seo, M.H. and Shin, Y., 2022. Fast and robust online inference with stochastic gradient descent via random scaling. In Proceedings of the AAAI Conference on Artificial
Intelligence (Vol. 36, No. 7, pp. 7381-7389). https://doi.org/10.1609/aaai.v36i7.20701.
• Lee, S., Liao, Y., Seo, M.H. and Shin, Y., 2023. Fast Inference for Quantile Regression with Tens of Millions of Observations. arXiv:2209.14502 [econ.EM] https://doi.org/10.48550/arXiv.2209.14502
You can install the development version from GitHub with:
We begin by calling the SGDinference package.
Case Study: Estimating the Mincer Equation
To illustrate the usefulness of the package, we use a small dataset included in the package. Specifically, the Census2000 dataset from Acemoglu and Autor (2011) consists of observations on 26,120
nonwhite, female workers. This small dataset is constructed from “microwage2000_ext.dta” at https://economics.mit.edu/people/faculty/david-h-autor/data-archive. Observations are dropped if hourly
wages are missing or years of education are smaller than 6. Then, a 5 percent random sample is drawn to make the dataset small. The following three variables are included:
• ln_hrwage: log hourly wages
• edyrs: years of education
• exp: years of potential experience
We now define the variables.
As a benchmark, we first estimate the Mincer equation and report the point estimates and their 95% heteroskedasticity-robust confidence intervals.
Estimating the Mean Regression Model Using SGD
We now estimate the same model using SGD.
It can be seen that the estimation results are similar between the two methods. There is a different command that only computes the estimates but not confidence intervals.
We compare the execution times between the two versions and find that there is not much difference in this simple example. By construction, it takes more time to conduct inference via sgdi_lm.
To plot the SGD path, we first construct a SGD path for the return to education coefficients.
Then, we can plot the SGD path.
To observe the initial paths, we now truncate the paths up to 2,000.
It can be seen that the SGD path has almost converged after only 2,000 steps, less than 10% of the sample size.
What else the package can do
See the vignette for the quantile regression example.
Acemoglu, D. and Autor, D., 2011. Skills, tasks and technologies: Implications for employment and earnings. In Handbook of labor economics (Vol. 4, pp. 1043-1171). Elsevier.
Lee, S., Liao, Y., Seo, M.H. and Shin, Y., 2022. Fast and robust online inference with stochastic gradient descent via random scaling. In Proceedings of the AAAI Conference on Artificial Intelligence
(Vol. 36, No. 7, pp. 7381-7389). https://doi.org/10.1609/aaai.v36i7.20701.
Lee, S., Liao, Y., Seo, M.H. and Shin, Y., 2023. Fast Inference for Quantile Regression with Tens of Millions of Observations. arXiv:2209.14502 [econ.EM] https://doi.org/10.48550/arXiv.2209.14502.
ordered pair
What is ordered pair rule notation?
For any two objects a and b, the ordered pair (a, b) is a notation specifying the two objects a and b, in that order.
How do you write set notation in math?
We use the symbol ∈ to denote membership in a set. Since 1 is an element of set B, we write 1∈B and read it as ‘1 is an element of set B’ or ‘1 is a member of set B’. Since 6 is not an
element of set B, we write 6∉B and read it as ‘6 is not an element of set B’ or ‘6 is not a member of set B’.
What is ordered pair in math?
An ordered pair is a pair of numbers, (x,y) , written in a particular order. The ordered pair (x,y) is not the same ordered pair as (y,x) .
What is the set of an ordered pair numbers?
Expert Answer. A set of ordered pairs is called a relation. If the x-coordinates of each ordered pair are unique, that is, each input of the set corresponds to only one output, then the set can be
more specifically called a function.
How do you write an ordered pair function?
Writing Linear Functions With Two Ordered Pairs
1. Use the two ordered pairs to find the slope using the formula m = (y2 − y1)/(x2 − x1).
2. Find the y-intercept by substituting the slope and one of the ordered pairs into f(x)=mx+b and solving for b.
3. Substitute the slope and y-intercept into the function f(x)=mx+b.
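As a small illustration of those three steps (a sketch only; the ordered pairs are made up):

```python
def line_through(p1, p2):
    """Return (m, b) such that f(x) = m*x + b passes through ordered pairs p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # step 1: slope
    b = y1 - m * x1             # step 2: y-intercept from f(x1) = m*x1 + b
    return m, b                 # step 3: f(x) = m*x + b

print(line_through((1, 3), (4, 9)))   # (2.0, 1.0), i.e. f(x) = 2x + 1
```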
What is set notation example?
Common Set Notation |A|, called cardinality of A, denotes the number of elements of A. For example, if A={(1,2),(3,4)}, then |A|=2. A=B if and only if they have precisely the same elements. For
example, if A={4,9} and B={n^2 : n=2 or n=3}, then A=B.
What does X Y mean in set notation?
Set inclusion X ⊆ Y means every element of X is an element of Y ; X is a subset of Y . Set equality X = Y means every element of X is an element of Y and every element of Y is an element of X.
How is an ordered pair written?
Ordered pairs are often used to represent two variables. When we write (x, y) = (7, – 2), we mean x = 7 and y = – 2. The number which corresponds to the value of x is called the x-coordinate and the
number which corresponds to the value of y is called the y-coordinate.
Is a set of ordered pairs XY?
A relation is just a set of ordered pairs (x,y) . In formal mathematical language, a function is a relation for which: if (x1,y) and (x2,y) are both in the relation, then x1=x2 . This just says that
in a function, you can’t have two ordered pairs with the same x -value but different y -values.
What does an ordered pair look like?
An ordered pair contains the coordinates of one point in the coordinate system. A point is named by its ordered pair of the form of (x, y). The first number corresponds to the x-coordinate and the
second to the y-coordinate. To graph a point, you draw a dot at the coordinates that corresponds to the ordered pair.
Calculate stock gross profit
Trying to figure out inventory or the cost of goods sold formula? In this lesson we'll go over the income and expenses for a trading business, stock calculations. Suppose that one month into the
current fiscal year, the company decides to use the gross profit margin from the previous year to estimate inventory. Net sales 7 Feb 2020 The cost of goods sold is the price of all inventory sold
which includes both fixed and variable costs. Fixed costs don't change based on
How to calculate net profit, what's included, important definitions to remember and Since net profit equals total revenue after expenses, to calculate net profit, you just Careful management of your
inventory can increase your cash flow and Use the information below to help you calculate both of these ratios. Give your answers to 1 d.p.. Sales Revenue. £73 000. Cost of Sales. Opening Stock.
Every business owner wants to calculate how much profit their business has made, As you can see, Company A spent a total of $330,000 on their inventory 6 Aug 2019 In order to calculate COGS, you
need to record inventory levels at the To calculate gross profit, subtract the total cost of goods sold during a 1 May 2019 Gross Profit, Net Sales + Closing Stock – Opening Stock – Cost of This
thing needs to be taken care of before calculating the gross margin. 11 Oct 2017 be that tough. Here's how to calculate gross profit margin, net profit, as well as what the difference is. You buy
your stock from a wholesaler.
GMROII is traditionally calculated by using one year's gross profit against the average of 12 or 13 units of inventory at cost. GMROII may vary depending on
The Stock Calculator uses the following basic formula: Profit (P) = ((SP * NS) - SC) - ((BP * NS) + BC), where NS is the number of shares and SP is the selling price per share. To calculate gross profit, we
need to start with the gross sales. Gross sales are the first item in an income statement. We deduct the sales returns/sales discounts from gross sales and we get net sales. The next item in the
income statement is the costs of goods sold. A company's profit is calculated at three levels on its income statement, starting with the most basic – gross profit – and building up to the most
comprehensive – net profit. Between these two
How to Calculate Closing Stock from a GP Margin
Andrew is a small limited company business owner who is struggling to make sense of accounting for stock.
He has heard that stock can be calculated from his GP (Gross Profit) Margin but he doesn't know how to do that and has asked for our help. Calculate the gross profit by subtracting the cost from the revenue: $50 - $30 = $20. Divide gross profit by revenue: $20 / $50 = 0.4. Express it as a percentage: 0.4 * 100 = 40%. This is how you calculate profit margin, or simply use our gross margin calculator! As a consequence, since there exists closing stock at M/s Verma Traders at the end of the accounting period, this will change Gross Profit. That is, the Gross Profit figure turns out
to be worth Rs 57,000 as against Rs 42,000 as in the previous case.
Calculating Investment Returns. To avoid this sort of profit ambiguity, investment returns are expressed in percentages. The CTC investment was made at $10/share and sold at $17/share. The per share
gain is $7 ($17 - $10). Thus, your percentage return on your $10/share investment is 70% ($7 gain / $10 cost).
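The worked numbers above can be wrapped in two small helper functions; this is only an illustrative sketch of the arithmetic, not a tool from any of the sites quoted.

```python
def gross_margin(revenue, cost):
    """Gross profit as a fraction of revenue, e.g. (50 - 30) / 50 = 0.4 (40%)."""
    return (revenue - cost) / revenue

def percentage_return(buy_price, sell_price):
    """Per-share gain as a fraction of the purchase price, e.g. (17 - 10) / 10 = 0.7 (70%)."""
    return (sell_price - buy_price) / buy_price

print(gross_margin(50, 30))        # 0.4 -> 40% margin
print(percentage_return(10, 17))   # 0.7 -> 70% return
```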
19 Sep 2012 The Turn/Earn Index will help you balance turnover and profits. It is calculated by multiplying inventory turns by the gross margin percentage. 25 Apr 2013 Calculating Profit Ratio (Gross
Margin) · accounting calculation stock-analysis ratio. When looking at a company's income statement to calculate
Your costs remain the same at $200. This means your gross profit is $52. (The math: $252 in sales revenue - $200 in cost of goods sold = $52 gross profit) This means the 20 percent discount you gave
wiped an incredible 54.8 percent off your gross profit.
31 Mar 2013 Know whether your business is making money. Next. --shares. Entrepreneur Staff . 13 May 2017 Follow these steps to estimate ending inventory using the gross profit method: Add together
the cost of beginning inventory and the cost of The following formula/equation is used to compute gross profit ratio: if you want to calculate gross profit with the figures of sales and closing
stock value and no Here we discuss how to calculate gross profit using its formula with examples, ( COGS) is calculated by subtracting the closing stock from the sum of opening It also includes
items that could change the value of the purchases of inventory. These items include such things as discounts you receive or freight that you pay on
The following formula/equation is used to compute gross profit ratio: if you want to calculate gross profit with the figures of sales and closing stock value and no
Every business owner wants to calculate how much profit their business has made, As you can see, Company A spent a total of $330,000 on their inventory 6 Aug 2019 In order to calculate COGS, you
need to record inventory levels at the To calculate gross profit, subtract the total cost of goods sold during a 1 May 2019 Gross Profit, Net Sales + Closing Stock – Opening Stock – Cost of This
thing needs to be taken care of before calculating the gross margin.
9 Dec 2019 Gross profit will appear on a company's income statement and can be calculated by subtracting the cost of goods sold (COGS) from revenue
maths Archives - Aleph_epsilon
I gave the talk “A unified view towards diagonal arguments” at Archimedeans yesterday and the transcript is now available here.
The talk is on the Lawvere fixed point theorem, beginning with cartesian closed categories, proceeding to state and prove the theorem in that context (although in the version I delivered most formal treatment of categories was skipped for brevity and better intuition), and going on to give classical applications, culminating in the Gödel incompleteness theorem.
As I said in the acknowledgement, I want to thank Archimedeans for organising this event, which I found a great way to know what my friends on the other side are doing in their own time. I am also
extremely grateful to those who have, in one way or another, helped me in the preparation, delivery and feedback of the talk.
Finally I would like to wrap up with a piece of feedback I got from a friend, who asked me if I had some kind of superpower IA mathmo in mind when I was writing the prerequisites for the talk. Yes and no: you are an extremely strong IA mathmo if you went to the talk and understood everything on the spot. But even if you did not, there is some part of the talk (I hope) that makes sense to you, and more importantly, I hope this has made a dent somewhere in your repertoire of maths knowledge, so that later on when you learn, for example, the Gödel incompleteness theorem along the way, the idea will click with you. Perhaps not coincidentally, this is also the opinion I had towards category theory: learn it formally at some point and let time sink in.
So long!
Symmetric bilinear form vs. quadratic form
It has just struck me that symmetric bilinear forms and quadratic forms are different objects, corresponding to symmetric tensors and symmetric algebra respectively of certain tensor algebra (in this
post “tensor algebra” will take the narrow meaning of 2nd tensor power of a vector space). I first came to this observation when I was told in IB Linear Algebra that to recover a symmetric bilinear
form from a quadratic form, one requires the characteristic of the ground field to be not 2. This oddity can be explained by the “averaging” map
which requires 2 to be a unit. This explanation is stilted (at least to me) and troubled me since it hinges on this explicit construction and does not tell us a priori why so. It occurred to me
recently when I was going through some old notes that the formula is precisely the symmetrization map of the tensor algebra. It then follows easily that symmetric bilinear forms are images of the
“averaging” map and thus is the subalgebra of symmetric tensors. I’ll elaborate briefly on this idea in this post.
where the first isomorphism comes from Hom-tensor adjunction and the last one is an easy exercise. It is also possible to directly construct a bilinear map
In this way, symmetric tensors in
Now if the characteristic of
induces an isomorphism of
One more thing we get for free from this construction is that the space of bilinear forms is the direct sum of symmetric and skew-symmetric forms: since the symmetrization map (call it
where the former is the ideal generated by elements of the form
Just a word to end the post: as long as the characteristic of the field is not 2, it is possible to generalise this construction (and thus equivalence) to *-algebras. This is related to
epsilon-quadratic form.
D. Dummit & R. Foote, Abstract Algebra, §11.5
Recommendation: A beginner’s guide to forcing
I’ve just finished reading Timothy Chow’s A beginner’s guide to forcing and what a nice read! I recommend all readers, particularly those interested in model and set theory, to take a look.
I’m particularly fond of the exposition at the beginning addressing the validity of ZFC as the foundation of maths, which I had had qualms about at some point:
One common confusion about models of ZFC stems from a tacit expectation that some people have, namely that we are supposed to suspend all our preconceptions about sets when beginning the study of
ZFC. For example, it may have been a surprise to some readers to see that a universe is defined to be a set together with. . . . Wait a minute—what is a set? Isn’t it circular to define sets in
terms of sets? In fact, we are not defining sets in terms of sets, but universes in terms of sets. Once we see that all we are doing is studying a subject called “universe theory” (rather than
“set theory”), the apparent circularity disappears.
The reader may still be bothered by the lingering feeling that the point of introducing ZFC is to “make set theory rigorous” or to examine the foundations of mathematics. While it is true that
ZFC can be used as a tool for such philosophical investigations, we do not do so in this paper. Instead, we take for granted that ordinary mathematical reasoning—including reasoning about sets—is
perfectly valid and does not suddenly become invalid when the object of study is ZFC. That is, we approach the study of ZFC and its models in the same way that one approaches the study of any
other mathematical subject.
Also there is a very nice, if not the best I've seen, explanation of Skolem's paradox. So much for the introduction; I'll leave you to find out what forcing is and how it is used to prove the
independence of CH from ZFC.
Quadratic fields
An illustrated summary of IID Number Fields using quadratic fields is available here. Therein the following concepts/results are stated, usually accompanied by a worked example in quadratic field:
• ring of integers
• integral basis
• discriminant
• Dedekind’s criterion, unique factorisation & class group
• Minkowski’s theorem and application
• Dirichlet’s unit theorem
Different flavours of geometry
This post is almost verbatim from Undergraduate Algebraic Geometry § 0.3.
The specific flavour of algebraic geometry comes from the use of only polynomial functions (together with rational functions); to explain this, if
There are of course inclusions
These rings of functions correspond to some of the important categories of geometry:
The point I want to make here is that each of these inclusion signs represents an absolutely huge gap, and that this leads to the main characteristics of geometry in the different categories.
Although it’s not stressed very much in school and first year university calculus, any reasonable way of measuring
M. Reid, Undergraduate Algebraic Geometry, § 0.3
Determinant trick, Cayley-Hamilton theorem and Nakayama’s lemma
The post is also available as pdf.
Cayley-Hamilton theorem is usually presented in a linear algebra context, which says that a square matrix satisfies its own characteristic polynomial. There are many proofs available, among which is the bogus proof of substituting the matrix directly into the characteristic polynomial.
The reason it doesn’t work is because the product
I will develop the theory using language of rings and modules but if you don’t understand that, feel free to substitute “fields” and “vector spaces” in place.
There is a technical remark to make: later we will use determinant of matrices over
Given a module endomorphism
Cayley-Hamilton Theorem
Note that this is a relation of endomorphisms with coefficients in
Proof: Let
we have
Again, the multiplication is by scalar
Claim that if
which maps
To show this, recall that
where multiplication on the left is between matrices. Let
If you feel that little work is done in the proof and suspect it might be tautological somewhere (which I had when I first saw this proof), go through it again and convince yourself it is indeed a
bona fide proof. There are two tricks used here: firstly we extend the scalars by recognising
The key idea in the proof, sometimes called the determinant trick, has many applications in commutative algebra:
Proof: Let
An immediate corollary is Nakayama’s Lemma, which alone is an important result in commutative algebra:
Nakayama’s Lemma
Proof: Apply the trick to
We use the result to prove a rather interesting fact about module homomorphism:
Proof: Let
As a side note, the converse is not true: injective module homomorphisms need not be surjective. For example, multiplication by 2 is an injective but not surjective endomorphism of the ℤ-module ℤ.
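Circling back to the theorem at the start of the post, a quick numerical sanity check over the reals is easy to run. The sketch below (assuming numpy is available) computes the characteristic polynomial of a random matrix and verifies that evaluating it at the matrix gives the zero matrix, up to floating-point error.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Coefficients of the characteristic polynomial det(xI - A), highest power first.
coeffs = np.poly(A)

# Evaluate the polynomial at A itself (matrix powers, not elementwise powers).
p_of_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - i)
             for i, c in enumerate(coeffs))

print(np.allclose(p_of_A, np.zeros_like(A)))  # True: A satisfies its own characteristic polynomial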
M. Reid, Undergraduate Commutative Algebra, §2.6 – 2.8
|
{"url":"http://qk206.user.srcf.net/category/maths/","timestamp":"2024-11-13T05:28:47Z","content_type":"text/html","content_length":"150879","record_id":"<urn:uuid:51b6ae67-0092-4a23-ba59-6537d6321f62>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00185.warc.gz"}
|
The Julia Set
Fractals – subsets of Euclidean space with a fractal dimension that strictly exceeds their topological dimension – are one of my fondest topics in mathematics. I have always been fascinated by
fractals, and the first piece of code I ever wrote was to create fractals.
^The Julia set.
The Julia set is one of the most beautiful fractals to have been discovered. Named after the esteemed French mathematician Gaston Julia, the Julia set expresses a complex and rich set of dynamics,
giving rise to a wide range of breathtaking visuals.
In this article, we will explore how to use Python to create the Julia set.
Complex Numbers
The Julia set lies on the complex plane, which is a space populated by complex numbers. Each point you see in the image above corresponds to a specific complex number on the complex plane with value:
z = x + yi,
where i = √-1 is the imaginary unit.
Complex numbers have very interesting dynamics which is what gives the Julia set its complex dynamics, but for now let us take a look at real numbers, the numbers that we encounter in everyday life.
If you take the square of the real number 10, you get 100. Taking the square of 100 leads to 10,000. If we keep taking the square of the result, we get 100,000,000 in the next iteration, an
astronomically large number in the next, and an even larger number in the next. Doing this enough times, we eventually tend to infinity, and we can say that the operation described is not bounded.
On the other hand, if you take the square of the real number 0.1, you get 0.01. Taking the square of 0.01 leads to 0.0001, and as with above if we keep taking the square of the result, we get
0.00000001 in the next number, a microscopically small number in the next and an even smaller number in the next. Doing this enough times we eventually tend to zero, and we can say that the operation
described is bounded.
It turns out that complex numbers also behave similarly. If we take the square of the complex number 1 + i, we get 2i. If we square that we get -4. Repeating this we end up with 16, 256, 65536,
4294967296 and so on until we tend to infinity. This is similar to what happens if we iteratively square the result of the square of 10.
Likewise, if we take the square of 0.5 + 0.5i, we get 0.5i, -0.25, 0.0625, 0.00390625 and so on until we tend to 0. This is similar to what happens if we iteratively square the result of the square
of 0.1.
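A few lines of Python make the contrast concrete; the helper below just records the magnitude after each squaring, so the first starting point visibly escapes towards infinity while the second collapses towards zero.

def iterate_square(z, steps=8):
    """Repeatedly square z and record the magnitude after each step."""
    magnitudes = []
    for _ in range(steps):
        z = z ** 2
        magnitudes.append(abs(z))
    return magnitudes

print(iterate_square(1 + 1j))      # magnitudes grow without bound
print(iterate_square(0.5 + 0.5j))  # magnitudes shrink towards zero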
Iterative Operations on the Complex Plane
Now instead of doing this iterative squaring operation on a single complex number, we can do this for a grid of complex numbers z lying on the complex plane. In addition to the squaring operation,
we also add an arbitrary complex number c after the squaring operation, giving the operation z := z**2 + c.
After multiple iterations of z := z**2 + c, most complex numbers in z will blow up to infinity, while some will remain bounded. During the iterative process we track how many iterations it takes for a
complex number in z to blow up to infinity. It is this per-point iteration count that produces the fractal image shown above.
Creating the Julia Set with Python
The Python code to create the Julia set is given below. The function julia_set takes the arguments c an arbitrary complex number, num_iter the number of iterations to perform, N the number of grid
points on each axis in the grid, and X0 the limits of the grid.
import numpy as np
import matplotlib.pyplot as plt
def julia_set(c = -0.835 - 0.2321 * 1j, num_iter = 50,
N = 1000, X0 = np.array([-2, 2, -2, 2])):
# Limits of the complex grid.
x0 = X0[0]
x1 = X0[1]
y0 = X0[2]
y1 = X0[3]
# Set up the complex grid. Each element in the grid
# is a complex number x + yi.
x, y = np.meshgrid(np.linspace(x0, x1, N),
np.linspace(y0, y1, N) * 1j)
z = x + y
# F keeps track of which grid points are bounded
# even after many iterations of z := z**2 + c.
F = np.zeros([N, N])
# Iterate through the operation z := z**2 + c.
for j in range(num_iter):
z = z ** 2 + c
index = np.abs(z) < np.inf
F[index] = F[index] + 1
return np.linspace(x0, x1, N), np.linspace(y0, y1, N), F
During each step in the for loop, after z = z ** 2 + c we check which points in z are still smaller than np.inf. The number of iterations taken by each point to blow up to infinity is recorded in F.
By plotting F as a 2 dimensional image, the Julia set finally reveals itself. For example, the code below results in the image shown at the top of this post!
The dark areas in the image correspond to points in the complex plane z that blow up to infinity very quickly after only a few iterations, while the light areas correspond to points that remain
bounded even after many iterations!
x, y, F = julia_set(c = 0.285 + 0.01 * 1j, num_iter = 200,
N = 1000, X0 = np.array([-1.5, 1.5, -1.5, 1.5]))
plt.figure(figsize = (10, 10))
plt.pcolormesh(x, y, F, cmap = "gist_heat")
As mentioned earlier, the Julia set has a rich and complex set of dynamics. This can be explored by changing the value of c! For example if we use the value c = -0.835 - 0.2321i, we get the
visualization below, which is completely different from the previous one.
x, y, F = julia_set(c = -0.835 - 0.2321 * 1j, num_iter = 200,
N = 1000, X0 = np.array([-1.5, 1.5, -1.5, 1.5]))
plt.figure(figsize = (10, 10))
plt.pcolormesh(x, y, F, cmap = "gist_heat")
^Also the Julia set.
In fact, we can take things one step further and define c to be the entire complex grid, by using c = x + y instead of using the value of the input argument in the code above (and starting every
point at z = 0). In this case, the resulting fractal is so famous that it has its own name: the Mandelbrot set.
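A sketch of that variation is below; it is not part of the original code above, but it reuses the same grid-building idea, starts every point at z = 0, uses the grid itself as c, and picks a conventional viewing window for the Mandelbrot set.

import numpy as np
import matplotlib.pyplot as plt

def mandelbrot_set(num_iter = 100, N = 1000, X0 = np.array([-2, 1, -1.5, 1.5])):
    # Same grid construction as julia_set above, but here c is the whole grid
    # and every point starts iterating from z = 0.
    x, y = np.meshgrid(np.linspace(X0[0], X0[1], N),
                       np.linspace(X0[2], X0[3], N) * 1j)
    c = x + y
    z = np.zeros_like(c)
    F = np.zeros([N, N])
    for _ in range(num_iter):
        z = z ** 2 + c
        F[np.abs(z) < np.inf] += 1
    return np.linspace(X0[0], X0[1], N), np.linspace(X0[2], X0[3], N), F

x, y, F = mandelbrot_set()
plt.figure(figsize = (10, 10))
plt.pcolormesh(x, y, F, cmap = "gist_heat")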
|
{"url":"http://www.raucci.net/2021/11/27/the-julia-set/","timestamp":"2024-11-03T13:15:10Z","content_type":"text/html","content_length":"66086","record_id":"<urn:uuid:ecc71a4f-22fa-4148-a5e0-db4807bd573d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00573.warc.gz"}
|
Shape based kinetic outlier detection in real-time PCR
In the last few years, real-time quantitative polymerase chain reaction (real-time PCR) has become the technique of choice for absolute or relative quantification of gene expression due to its
rapidity, accuracy and sensitivity[
Furthermore, recent advances in the sequencing of the human genome, mRNA and miRNA expression profiling of numerous cancer types, disease-associated polymorphism identification and the expanding
availability of genomic sequence information for human pathogens have led to marked growth in molecular diagnostics [
The gold standard quantification method (Ct method) in real-time PCR assumes that the compared samples have similar PCR efficiencies. However, quantification by real-time PCR is very sensitive to
slight differences in PCR efficiencies among samples. Indeed, a small difference of 5% in PCR efficiency will result in a three-fold difference in the amount of DNA after 25 cycles of exponential
amplification. Many factors present in samples as well as co-extracted contaminants can inhibit PCR, confounding template amplification and analysis [
]. This is a major problem when working with biological samples. Severe inhibition will lead to false-negative results, whereas a slight to moderate inhibition can result in an underestimation of the
affected sample's DNA concentration [
]. Furthermore, amplification efficiency can fluctuate as a function of non-optimal assay design, enzyme instability, or the presence of inhibitors [
]. Although a variety of methods have been developed to quantify template DNA [
], very few allow simultaneous evaluation of template quantity and quality without the addition of an internal positive control that is co-amplified with the target of interest. Hence Bar and
co-workers proposed a method (called KOD) based on amplification efficiency calculation for the early detection of non-optimal assay conditions [
]. This approach is extremely straightforward and effective, but it is based on a PCR amplification efficiency calculation for which there is still not a method fully accepted by the scientific
community. A large number of studies have attempted to calculate amplification efficiency assuming that PCR is inherently exponential in nature. Based on the assumption of the log-linearity region,
constant amplification efficiency is calculated from the slope of linear regression in that window [
]. An alternative approach is based on the observation that PCR trajectory can be effectively modelled by the sigmoid function [
] allowing PCR efficiency to be estimated using non-linear regression fitting [
]. Recently, a simplified approach called "linear regression of efficiency" has allowed us to estimate amplification efficiency by applying linear regression analysis to the fluorescence readings
within the central region of amplification profile [
]. Notably, it has been demonstrated that estimates of PCR efficiency vary widely according to the approach that has been adopted [
Very recently, Tichopad et al. [
] introduced a new quality control test for quantitative PCR; in this procedure the first derivative maximum and the second derivative maximum were estimated using a logistic fitting on the PCR
trajectory. This approach allowed them to monitor the first half of the curve using two parameters. Our study aims to develop a quality test tool, which is not based on amplification efficiency
estimation, in order to detect samples that do not show an amplification kinetic similar to those of standard samples. In this work, a non-linear fitting of Richards equation was used to parameterize
PCR amplification profiles from a large sample set. The subsequent calculation of the variance of the estimated parameters and the development of a statistical measure based on the Mahalanobis
distance allowed us to develop the SOD method (Shape based kinetic Outlier Detection). The SOD analysis of inhibited amplifications and the comparison of this method with KOD were investigated in detail.
Materials And Methods
Quantitative Real-Time PCR
The DNA standard consisted of a pGEM-T (Promega) plasmid containing a 104 bp fragment of the mitochondrial gene NADH dehydrogenase 1 (MT-ND1) as insert. This DNA fragment was produced by the ND1/ND2
primer pair (forward ND1: 5'-ACGCCATAAAACTCTTCACCAAAG- 3' and reverse ND2: 5'-TAGTAGAAGAGCGATGGTGAGAGCTA- 3'). This plasmid was purified using the Plasmid Midi Kit (Qiagen) according to the
manufacturer's instructions. The final concentration of the standard plasmid was estimated spectrophotometrically by averaging three replicate A[260] absorbance determinations.
Real time PCR amplifications were conducted using
LightCycler^® 480 SYBR Green I Master
(Roche) according to the manufacturer's instructions, with 500 nM primers and a variable amount of DNA standard in a 20 µl final reaction volume. Thermocycling was conducted using a
LightCycler^® 480
(Roche) initiated by a 10 min incubation at 95°C, followed by 40 cycles (95°C for 5 s; 60°C for 5 s; 72°C for 20 s) with a single fluorescent reading taken at the end of each cycle.
Each reaction combination, namely starting DNA and amplification mix percentage, was conducted in triplicate and repeated in four separate amplification runs. All the runs were completed with a melt
curve analysis to confirm the specificity of amplification and lack of primer dimers.
Ct (fit point method) and Cp (second derivative method) values were determined by the LightCycler^® 480 software version 1.2 and exported into an MS Excel data sheet (Microsoft) for analysis after background subtraction (available as Additional file 1). For Ct (fit point method) evaluation a fluorescence threshold manually set to 0.4 was used for all runs.
Estimation of PCR efficiency
The raw PCR data were used to calculate amplification efficiency. The PCR efficiency for each individual sample was derived from the slope of the regression line in the window of linearity [
]. Baseline correction and window of linearity identification were carried out using the last release of LinRegPCR [
]. PCR efficiencies were estimated from four sample sets: standard amplification curves, and standard amplification curves with added tannic acid, IgG or quercitin read-outs. The window of linearity calculated from all the data sets encompassed the fluorescence threshold of 0.4 chosen for the quantitative analysis.
Mathematical model of KOD
The mathematical model of KOD, based on efficiency, was proposed by Bar et al. [
]. Briefly, this was done by comparing the PCR efficiency of a sample (x[eff]) with the efficiencies of the standard curve samples. A test sample is classified as an outlier if |z| > 1.96, with
$z = \frac{x_{eff} - \mu_{eff}}{\sigma_{eff}}$
where μ[eff] is the efficiency mean and σ[eff] is the standard deviation of the efficiency of the standard curve samples. Alternatively, it can be noted that the statistic
$\left ( \frac{x - \mu}{\sigma}\right )^2$
is distributed as a χ² with one degree of freedom; if χ² > 3.84, we can reject the null hypothesis at α = 0.05.
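As a rough illustration of this test, the sketch below uses purely illustrative efficiency values: the sample efficiency is standardized against the mean and standard deviation of the standard-curve efficiencies and flagged when |z| > 1.96.

import numpy as np

standard_effs = np.array([1.88, 1.90, 1.87, 1.89, 1.91, 1.86])  # illustrative standard-curve efficiencies
sample_eff = 1.72                                                # a suspected inhibited run

mu, sigma = standard_effs.mean(), standard_effs.std(ddof=1)
z = (sample_eff - mu) / sigma
is_outlier = abs(z) > 1.96            # equivalently z**2 > 3.84 (chi-square, 1 d.f.)
print(f"z = {z:.2f}, outlier: {is_outlier}")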
Mathematical model of SOD
Shape based kinetic outlier detection (SOD) was based on the shapes of the amplification curves. In order to fit fluorescence raw data, nonlinear regression fitting of 5-parameter Richards function,
an extension of the logistic growth curve, was used [
$F_{x}=\frac{F_{max}}{\left [ 1+e^{(-\frac{1}{b}(x-c))}\right ]^{d}}+F_{b}$ Eq. 1
where x is the cycle number, F[x] is the reaction fluorescence at cycle x, F[max] is the maximal reaction fluorescence, F[b] is the background reaction fluorescence, and b, c and d represent the estimated coefficients. Nonlinear regressions for the 5-parameter Richards function were performed, determining unweighted least squares estimates of the parameters.
The shape parameters used were the plateau value of the amplification curve (F[max]), the slope of the tangent straight line at the inflection point (m) and the y-coordinate of the inflection point (Y[f]) (Additional file 2).
The y-coordinate of the inflection point (Y[f]) was calculated as follows:
$Y_{f}= F_{max}\left ( \frac{d}{d+1} \right )^{d}$ Eq. 2
and the tangent straight line slope (m) was estimated as:
$m= \frac{F_{max}}{b} \left ( \frac{d}{d+1} \right )^{d+1}$ Eq. 3
The normal distribution of the F[max], Y[f] and m parameters, obtained from standard samples, was checked using the Kolmogorov-Smirnov test for normality; the significance of the correlation between these parameters and input DNA concentrations, expressed as the Log of input DNA, was tested with a t test as follows:
$t= r \sqrt{\frac{n-2}{1-r^{2}}}$ Eq. 4
where r is the Pearson coefficient and n the sample size (n = 72). The multivariate normality of the adopted reference set was evaluated according to Rencher AC [
](Additional file 3). In addition, the asymmetry (Asym) of the amplification curves was estimated as follows:
$Asym = 1 - \frac{2\: Y_{f}}{F_{max}}$ Eq. 5
By substituting Eq. 2, Eq. 5 can be simplified as:
$Asym = 1 - 2\: \left ( \frac{d}{d+1} \right )^{d}$ Eq. 6
In agreement with this equation, the curve is symmetric (that is, Asym = 0) when d = 1, or equivalently 2·Y[f] = F[max]. On the contrary, when d > 1 we have 2·Y[f] < F[max] (the curve is asymmetric), hence Asym > 0.
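To make the parameterisation concrete, the sketch below (assuming numpy and scipy are available, and using a synthetic amplification profile rather than real read-outs) fits the 5-parameter Richards function of Eq. 1 and then derives F[max], Y[f], m and Asym from the fitted coefficients.

import numpy as np
from scipy.optimize import curve_fit

def richards(x, f_max, b, c, d, f_b):
    # Eq. 1: 5-parameter Richards function.
    return f_max / (1 + np.exp(-(x - c) / b)) ** d + f_b

# Synthetic, background-subtracted amplification profile over 40 cycles.
cycles = np.arange(1, 41)
fluor = richards(cycles, 46.0, 1.6, 24.0, 1.2, 0.3) \
        + np.random.default_rng(1).normal(0, 0.2, cycles.size)

p0 = [fluor.max(), 1.5, cycles[np.argmax(np.diff(fluor))], 1.0, fluor.min()]
(f_max, b, c, d, f_b), _ = curve_fit(richards, cycles, fluor, p0=p0, maxfev=10000)

y_f = f_max * (d / (d + 1)) ** d               # Eq. 2: y-coordinate of the inflection point
m = (f_max / b) * (d / (d + 1)) ** (d + 1)     # Eq. 3: slope at the inflection point
asym = 1 - 2 * y_f / f_max                     # Eq. 5/6: asymmetry

print(f"F_max = {f_max:.1f}, Y_f = {y_f:.1f}, m = {m:.1f}, Asym = {asym:.3f}")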
Statistical model of SOD
After developing a method to estimate three different shape parameters (F[max], Y[f] and m), the next step was to set a criterion to identify test samples that deviated from expected values. This was done using the sample vector
$y=\begin{bmatrix} F_{max}\\ Y_{f}\\ m \end{bmatrix}$
which can be calculated for each experimental amplification; if y
belongs to a multivariate normal distribution, with mean vector
$\mu =\begin{bmatrix} \mu_{F_{max}} \\ \mu_{Y_{f}} \\ \mu_{m} \end{bmatrix}$
and Σ the corresponding variance-covariance matrix, the (y − μ)' Σ^-1 (y − μ) value (Mahalanobis distance) has an asymptotic χ² distribution with 3 degrees of freedom. The Mahalanobis distance is based on correlations between variables through which different patterns can be identified and analyzed. It is a useful way of determining the similarity of an unknown multivariate sample set to a known one. It takes into account the correlations of the data set and is not dependent on the scale of measurements. The mean vector and variance-covariance matrix were calculated from the shape parameters of the standard curve samples. Then if χ² > 7.81, we can reject the null hypothesis (with α = 0.05) and establish that the shape of the amplification curve is different from the shape of the standard curve samples, considering all three parameters [
]. All elaborations and graphics were obtained using Excel (Microsoft), Statistica 6.0 (Statsoft) and Statistical Package for Social Sciences (SPSS 13.0).
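A sketch of the outlier criterion with purely illustrative shape-parameter values: the mean vector and covariance matrix come from the standard-curve runs, and a test run is flagged when its squared Mahalanobis distance exceeds the chi-square threshold of 7.81 (3 degrees of freedom, α = 0.05).

import numpy as np

# Rows are standard-curve runs; columns are F_max, Y_f and m (illustrative values).
standards = np.array([[46.4, 23.9, 8.6],
                      [45.1, 23.6, 8.9],
                      [47.3, 24.1, 8.4],
                      [44.8, 22.8, 8.2],
                      [46.9, 23.5, 8.8],
                      [45.6, 24.0, 8.3]])
test_sample = np.array([38.0, 17.5, 6.1])   # a suspected inhibited run

mu = standards.mean(axis=0)
cov = np.cov(standards, rowvar=False)
diff = test_sample - mu
d2 = diff @ np.linalg.inv(cov) @ diff       # squared Mahalanobis distance

print(f"D^2 = {d2:.2f}, outlier: {d2 > 7.81}")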
Standard curve SOD analysis
The SOD model relies on the assumption that in order to achieve a reliable quantification, the amplification curves of unknown samples should not be significantly different from those of the standard
curve. We introduced the idea that the amplification kinetic can be monitored by the shape of the amplification curve. The shape of amplification curves was parameterized using the nonlinear
regression fitting of the Richards function on the fluorescence readings [
This mathematical procedure allowed us to obtain the five parameters characteristic of the Richards equation. These values were subsequently used to calculate the slope of the tangent at the
inflection point (
), the y-coordinate of the inflection point (
) and the maximum fluorescence value (
) of a reading. Finally, these three parameters allowed us to create a "fingerprint" for each amplification curve.
Based on this assumption, the parameters F[max], Y[f] and m
of the amplifications used to build a standard curve should not be significantly different from one another and should not be correlated with input DNA. To verify this assumption, a standard curve
was generated over a wide range of input DNA (3.14x10
; Fig. 1; Additional files 1).
Table 1 shows the mean, SD, and Kolmogorov-Smirnov test from a total of 72 runs. These results demonstrated that F[max], Y[f] and m were normally distributed, even though they showed a different dispersion. Subsequently, the relationship between these parameters and the Log of the starting DNA template was studied. As shown in Fig. 2, there was not a significant correlation between the Log of input DNA and these parameters (r² = 0.017, p = 0.28; r² = 0.033, p = 0.12; r² = 0.030, p = 0.14). In fact, the determination coefficients quantified only a very low proportion of the parameter variances, less than 3.3%.
In order to objectively define an amplification profile as an outlier, we introduced the variable Log(N[ob]/N[exp]), which estimates errors from quantification analysis using the Ct method. This variable relies on the residues estimated as the difference between calculated molecules, using the Ct method (Log of Number of Observed Molecules, referred to as LogN[ob]), and input DNA molecules (Log of Expected Molecules, referred to as LogN[exp]; in fact LogN[ob] - LogN[exp] = Log(N[ob]/N[exp])). The ratio Log(N[ob]/N[exp]) showed a normal distribution satisfying the assumption of homoscedasticity (Additional file 4). It is thus possible to determine a 95% confidence interval (CI) for the variable Log(N[ob]/N[exp]). These residues showed a normal distribution regardless of the starting DNA template, with the average equal to zero and the standard deviation constant (σ = 0.041). In our database, out of a total of 72 runs used to construct the standard curve, 6 runs showed the ratio Log(N[ob]/N[exp]) out of the CI (Additional file 5). Subsequently, PCR efficiency (Eff) was also estimated for each amplification curve; the LinRegPCR software [
] was used to fit the data points in the optimal range of the PCR exponential phase to obtain an automated evaluation of Eff (Table 1).
To determine how well outlier samples can be identified by KOD and SOD, we applied these statistical analyses to the runs of the standard curve; in particular we found that KOD identified 2 runs over the χ² threshold value of 3.84 while SOD revealed 3 runs out of the CI (Additional file 5). These outliers are probably false positives due to the definition and intrinsic properties of the 95% CI.
Table 1: One-Sample Kolmogorov-Smirnov Test of calibration curve.
Means (standard deviation) and Kolmogorov-Smirnov (K-S) test value (probability) of the following parameters: LinRegPCR efficiency (Eff), ordinate value of the inflection point (Y[f]), slope of the tangent straight line at the inflection point (m) and plateau value (F[max]).
Eff Y[f] m F[max]
N= 72
Mean (S.D.) 1.88 (0.02) 23.89 (2.86) 8.61 (1.20) 46.41 (6.07)
K-S value (p) 0.99 (0.28) 1.25 (0.09) 0.75 (0.63) 1.13 (0.15)
Figure 1
Linear regression analysis of standard samples.
The amplification profiles were produced by averaging the fluorescence readings of twelve replicate reactions (A). Linear regression obtained plotting Log input DNA versus Ct (B).
Figure 2
Efficiency and shape parameter values of standard curve samples.
The plots of efficiency (A), F[max] (B), Y[f] (C) and m (D) are shown; the abscissa is the Log transformation of input DNA and the ordinate is the parameter value. The square represents the median, the length of the box shows the interquartile range and the whiskers indicate the min-max values of the estimated parameters.
Inhibitor effects on real-time amplification
Tannic acid oxidizes to form quinones which covalently bind to DNA polymerase, inhibiting its activity [
]. Real-time amplification plots from 3.5 x 10 DNA molecules in the presence of increasing concentrations (0-0.1 mg per mL) of tannic acid were obtained. All the quantification values were obtained using the Ct method. The resulting amplification curves and the corresponding quantifications demonstrate the effects of inhibition on real-time analysis (Fig. 3A and 3B). As the tannic acid concentration increased, the Ct values went up steadily, leading to an underestimation of the starting molecules. This quantification error was highlighted when Log(N[ob]/N[exp]) dropped out of the corresponding CI (Figure 3B). Suppressed amplification was demonstrated by the calculations of efficiency using the LinRegPCR procedure (Additional file 5). The observed errors were the result of the progressive reduction of the plateau, linear phase length and slope of the inhibited curves; together these effects led to increasing Ct values (Fig. 3A) [
Fig. 3 Effect of tannic acid inhibition on amplification curve shape.
Left upper panel: amplification profiles obtained from samples with an equal starting number of template molecules and increasing inhibitor concentrations. For each inhibitor concentration only one amplification curve was plotted (instead of all 6 replicates). Values over and under the triangle indicator (at the upper right of the figure) show the lowest and the highest inhibitor concentrations used (A).
Right upper panel: effect of PCR inhibition on the ratio Log(N[ob]/N[exp]) in the presence of an equal starting number of template molecules and increasing inhibitor concentration. The ratio Log(N[ob]/N[exp]) represents the residues obtained from the linear regression of the calibration curves, where LogN[ob] is the number of calculated molecules using the Ct method and N[exp] is the number of expected molecules. Each symbol represents a single run. The abscissa axis corresponds to the mean and the dotted lines are the 95% confidence interval of the Log(N[ob]/N[exp]) ratio calculated from the standard curve runs (B).
Left lower panel: variation of F[max], Y[f] and m versus increasing inhibitor concentration. The variation is expressed as Relative Error = $1-\frac{\bar{x}_{Parameter_{obs}}}{\mu_{Parameter_{\mathrm{exp}}}}$; where $\bar{x}_{Parameter_{obs}}$ is the mean of the parameter calculated for each inhibitor concentration and $\mu_{Parameter_{\mathrm{exp}}}$ represents the mean parameter value from the standard curve samples (C).
Right lower panel: asymmetry values versus increasing inhibitor concentration. Asymmetry was computed as the following ratio: $Asym = 1 - \frac{2\: Y_{f}}{F_{max}}$ (D).
These data led us to investigate the modifications of the parameters F[max], Y[f] and m in response to increasing inhibitor concentrations. Fig. 3C shows the increase in relative error of these parameters in the presence of increasing tannic acid concentrations. Notably, these results also showed that curve asymmetry (Eq. 5) increased with higher inhibitor concentrations. This in turn demonstrates that not only the slope (m) and plateau (F[max]) of the curve decreased but also the shape changed, moving towards a more and more Richards-type kinetic (Fig. 3D).
Subsequently, we evaluated the effects of IgG and quercitin, molecules known to inhibit PCR, on amplification kinetics [
]. Both these molecules result in a significant underestimation of starting DNA molecules at high inhibitor concentrations (Fig. 4B and 5B). As shown in Fig. 4 and 5, we always found a change in F[max], Y[f] and m when the quantification error occurred.
Fig. 4 Effect of IgG inhibition on amplification curve shape.
For details refer to figure legend 3.
Fig. 5 Effect of quercitin inhibition on amplification curve shape.
For details refer to figure legend 3.
Furthermore, the asymmetry analysis showed an interesting singularity in the quercitin effects compared to those of tannic acid and IgG. In fact, quercitin led to kinetic alterations without a
significant effect on the curve symmetry (Fig. 5D).
SOD versus KOD analysis
SOD and KOD analyses were used to identify samples with aberrant PCR kinetics, due to inhibitor presence, which might lead to erroneous quantifications.
The F[max], Y[f] and m values calculated from each amplification curve, obtained in the presence of increasing tannic acid, IgG or quercitin concentrations, were used to estimate the SOD χ² value. Hence, if the χ² value from an amplification curve was higher than the threshold value 7.81, the quantification was defined as an outlier. PCR efficiencies were also estimated and KOD χ² values determined from the same amplifications. Quantification curves with a χ² value over 3.84 were rejected.
Hence the SOD and KOD performances were evaluated according to their ability to identify an amplification as an outlier when the Log(N[ob]/N[exp]) ratio is not within the 95% CI.
The results obtained by SOD and KOD analyses in the presence of increasing tannic acid concentrations are shown in Fig. 6A and 6B. When tannic acid concentrations ranging from 0.1 to 0.0125 mg/mL were added, all the obtained curves had significant quantification errors (Fig. 6A and 6B; full symbols indicate samples that showed the ratio Log(N[ob]/N[exp]) below the lower limit of the 95% CI).
Fig. 6 Values of KOD and SOD for each amplification curve versus Log of inhibitor concentration.
Symbols (squares and dots) represent the χ² values related to each amplification curve obtained in the presence of different inhibitor concentrations. The horizontal continuous lines are the critical values for detecting outliers (left panels: the KOD χ² critical value is 3.84; right panels: the SOD χ² critical value is 7.81; with α = 0.05).
Different inhibitors were used: tannic acid (A-B), IgG (C-D) and quercitin (E-F). True outliers (represented by black symbols; squares for KOD and dots for SOD) are amplification curves with a Log(N[ob]/N[exp]) ratio out of the 95% confidence interval, while white symbols represent acceptable runs with a Log(N[ob]/N[exp]) ratio included in the 95% confidence interval. The 95% confidence interval has been
obtained from the amplification curves of the standard samples.
The black symbols, over the horizontal continuous line, are runs correctly detected as outliers. Conversely, black symbols under this line are undetected outliers.
These curves were associated with χ² values higher than the threshold value 7.81 (Fig. 6B; the horizontal line shows the χ² threshold value). In this concentration range, KOD analysis appeared to be less powerful than SOD. In fact, KOD found as outliers (χ² > 3.84) only 8 of the 24 curves showing a Log(N[ob]/N[exp]) ratio out of the 95% CI (Fig. 6A). There were no outliers under 0.00625 mg/mL tannic acid concentration, with the exception of some amplifications that were randomly out of the CI.
SOD and KOD analyses were also applied to real-time quantifications in the presence of IgG or quercitin as inhibitors. When amplification reactions were conducted in the presence of 2-0.5 µg/mL IgG,
the suppression of amplification was efficiently revealed by both SOD and KOD, though SOD was more sensitive than KOD. In fact, SOD highlighted 17 outliers versus 15 revealed by KOD out of a total of
17 outliers (in the presence of IgG, 17 runs led to a Log(N[ob]/N[exp]) out of the 95% CI) (Fig. 6C and 6D).
Analogous results were also obtained for quercitin. In the presence of 0.04 mg/mL of quercitin, SOD found 6 outliers compared to the 3 revealed by KOD out of a total of 6 outliers (Fig. 6E and 6F;
for details of the SOD and KOD analysis see Additional file 5). Finally, we defined as true positives (TP) those amplifications showing χ² > threshold value that also led to a Log(N[ob]/N[exp]) ratio out of the 95% CI. Conversely, false positives (FP) were defined as samples that showed χ² > threshold value and a Log(N[ob]/N[exp]) ratio within the 95% CI. Consequently, true negatives (TN) were those amplifications showing χ² < threshold value that led to a Log(N[ob]/N[exp]) ratio within the 95% CI, and false negatives (FN) those showing χ² < threshold value and a Log(N[ob]/N[exp]) ratio out of the 95% CI.
Based on these definitions, the sensitivity of SOD and KOD is represented by the ratio
$Sens = \frac{TP}{TP + FN}$
while the specificity is the ratio:
$Spec = \frac{TN}{TN + FP}$
. Table 2 shows that SOD was more sensitive than KOD in all the tested settings, while SOD and KOD were equally specific in the presence of IgG and quercitin. SOD was also more specific than KOD in
the presence of tannic acid.
Table 2: Sensitivity and specificity of KOD and SOD analysis.
KOD Tannic Acid IgG Quercitin
Sensitivity 0.30 0.76 0.50
Specificity 0.94 0.96 0.98
SOD Tannic Acid IgG Quercitin
Sensitivity 0.93 1.00 1.00
Specificity 1.00 0.94 0.98
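The two summary measures in Table 2 follow directly from the confusion-matrix counts; the sketch below uses made-up counts (not the paper's actual ones) simply to show the computation.

def sensitivity_specificity(tp, fp, tn, fn):
    # Sens = TP / (TP + FN); Spec = TN / (TN + FP)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only.
sod_sens, sod_spec = sensitivity_specificity(tp=22, fp=0, tn=40, fn=2)
kod_sens, kod_spec = sensitivity_specificity(tp=8, fp=3, tn=37, fn=16)
print(f"SOD: sensitivity = {sod_sens:.2f}, specificity = {sod_spec:.2f}")
print(f"KOD: sensitivity = {kod_sens:.2f}, specificity = {kod_spec:.2f}")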
A topic of great interest is the development of hand-free tools for the detection of aberrant amplification profiles in real-time PCR analysis. Real-time PCR has rapidly become the most widely used
technique in nucleic acid quantification. Although real-time PCR analysis has gained considerable attention in many fields of molecular biology, it is still troubled by significant technical problems [
]. Hence the present study has focused on the investigation of a new outlier detection approach which is not based on the PCR efficiency estimate but rather on the shape of the amplification profile.
The amplification nature of PCR makes it vulnerable to small differences in efficiencies of compared samples [
]. In fact, the current "gold standard" in real-time PCR analysis, the threshold cycle method (called the Ct method), requires similar PCR efficiencies among compared samples.
However, dissimilarity in PCR efficiency results from different starting material sources, for example, different types of tissues [
]. Such differences might also be found when inhibitors of
DNA polymerase are present in cDNA samples [
] or in the presence of low quality SYBR green and/or dNTPs [
]. Furthermore, the frequency of PCR inhibition [
]and different inhibitory effects even among replicates [
] highlight the need of kinetic quality assessment for each sample. Hence Bar et al. [
] proposed a statistical method, called KOD, to detect samples with dissimilar efficiencies.
KOD searches for outliers based on the main assumption that to obtain a reliable quantification, PCR runs have to show efficiencies which are not significantly different from each other. This
condition is verified comparing the slopes of the straight-line regression calculated in the window-of-linearity after the log-transformation of each read-out fluorescence. In other words, if we
return to the raw data, the profile of the exponential curves in the window-of-linearity must not be significantly different among compared runs. In the development of the SOD method we extended this
concept to the whole curve, and all the runs included in the analysis have to show comparable amplification profiles.
The Ct method is based on the analysis of a serially diluted target. An example of this approach is presented in Fig. 1. A careful examination of the obtained amplification profiles illustrates the central
principle of the SOD method: all amplification curves are similar in shape and only the profile position is related to target quantity. The first amplification profiles, corresponding to the most
concentrated samples, are found on the left, whereas samples with an increasing dilution factor regularly shift towards the right. This observation led us to the insight that an exclusion criterion
could be based on the difference in shape rather than efficiency. This is in agreement with the work by Rutledge and Stewart [
] in which these authors described the amplification curve as a function of efficiency. Hence if efficiency determines the shape of a curve, by monitoring the shape of an amplification profile,
information concerning the efficiency of amplification can be obtained.
Firstly, a "fingerprint" for each amplification curve using
resulting from the fitting of the Richards equation on raw data was obtained. Subsequently, these parameters were used to obtain the variance-covariance matrix in order to calculate the
Mahalanobis distance
This statistical measure is based on correlations among variables through which different patterns can be identified and analysed. In particular, the SOD analysis made use of the Mahalanobis distance
to determine the similarity of an unknown sample compared to the standard set. This approach was very useful because it allowed us to evaluate not only the variance of the single parameters (F[max], Y[f] and m), but also to quantify the reciprocal co-variations among them. F[max] was considered in the development of SOD because this parameter demonstrates successful amplification and usually, in suboptimal amplification conditions, the read-outs do not reach characteristic F[max] values [
]. Examining our database, it was noted that F[max] showed high variance, thus it slightly affects χ² alone, but it had a significant impact on the variance-covariance matrix. The parameter m describes the slope of the curve at the inflection point [
]. In our model, the higher the value of m, the higher the amplification rate is. However, this estimator does not directly indicate the amplification efficiency understood as the proportion between current and previous product amounts [
Finally, the asymmetry of the amplification profiles was monitored by the relationship between Y[f] and F[max]. It has been demonstrated that absolutely symmetrical PCR curves seldom occur, justifying the introduction of a five-parameter fit [
]. Furthermore, in our previous work [
], it was demonstrated that the amplification reaction may deviate from a symmetric sigmoid curve to an asymmetric sigmoid (well described by Richards equation) in the presence of suboptimal
efficiency. In fact, the goodness of fit of the logistic model progressively decreased with lower efficiency suggesting a change of PCR curve amplification shape [
]. The correlation analysis between the F[max], Y[f] and m values obtained from the standard curve and input DNA demonstrated that these shape parameters are concentration-independent. This supports our experimental hypothesis that all the amplification curves of the standard curve are similar in shape and only the profile position determines target quantity. In the presence of PCR inhibition, it was found that increasing concentrations of tannic acid and IgG resulted in decreasing F[max], Y[f] and m values, while asymmetry increased with higher inhibitor concentrations (when asymmetry increases, Y[f] decreases more than the corresponding F[max]; Fig. 3 and 4). It may be that tannic acid inhibition is simply due to fluorescence quenching, since we found a dramatic decrease in F[max] and a slight decrease in curve slope. However, we also showed that fluorescence asymmetry increased, demonstrating that tannic acid produced an amplification kinetic distortion. The addition of quercitin to PCR amplifications produced very interesting data. In fact, we found decreased F[max], Y[f] and m values in the presence of high inhibitor concentrations; however, this flavonoid did not induce an asymmetric modification of the curves (Fig. 5D). The reported data clearly demonstrate that the SOD
method can identify non-optimal PCR kinetics resulting from different inhibition models. Furthermore, the results obtained in the presence of quercitin highlight the importance of using a
multivariate approach.
When comparing SOD to KOD performance, it was found that SOD was more sensitive than KOD in all the tested settings. SOD and KOD were equally specific in the presence of IgG and quercitin, whereas
SOD was more specific than KOD in the presence of tannic acid.
Furthermore, the SOD method presents several advantages over KOD; SOD is completely hand-free. Indeed, it is not necessary for the user to identify a window of analysis as in the KOD method, and more
importantly, SOD does not rely on a constant efficiency value avoiding all the problems connected with its determination [
]. As previously reported, variable PCR efficiency determination can lead to different results contributing to erroneous and spread quantifications [
]. Moreover, the log-transformation of fluorescence data, which could be responsible for bias in the analysis, is avoided.
The SOD method has been developed for the SYBR Green chemistry, and the application of this procedure to other chemistries, such as TaqMan, needs to be evaluated extensively.
Very recently, Tichopad et al. [
] proposed a new KOD procedure based on the Mahalanobis statistic [
]. In this study the first derivative maximum and the second derivative maximum were estimated using a logistic fitting on the central portion of the PCR trajectory. Using these two parameters these
authors proposed monitoring only the first half of the curve. On the contrary, the SOD method is based on the possibility of describing the whole PCR trajectory using Richards equation. SOD
represents a continuation and an extension of the application of Richards equation to real-time PCR readings [
]. We think that the SOD method introduces original concepts that are not found in the recently developed method described by Tichopad et al. [
]. SOD takes advantage of the possibility of describing the shape of the whole PCR trajectory through the combination of the parameters F[max], Y[f] and m,
while the method by Tichopad et al. [
] focuses on two key points of the trajectory: the maximum of the first and second derivative.
Furthermore, in the SOD method we used quite a different metric approach. Although other multivariate methods are available for similar tasks (support vector machines, K-means cluster), we used
asymptotic distribution of the Mahalanobis distance because it is a logical extension of the KOD method, which is based on univariate normal distribution.
We demonstrated, for the first time, that a comparison of the shape variation of an amplification profile with the shape of standard profiles can be used to exclude aberrant samples from Ct analysis. This allows us to avoid the spread of results and therefore increases the potential of quantification analysis.
Hence we propose SOD as a hand-free quality control method in real-time PCR analysis with applications in any field of molecular diagnostics.
Additional Material
Additional file 1 -
Fluorescence data and fitting elaboration of standard sample amplifications (standard curve) and amplifications obtained in the presence of: tannic acid, IgG and quercitin.
Additional file 2 -
Analytical solutions for the y value of the inflection point (Y[f]) and the slope of the tangent straight-line (m) crossing the inflection point.
Additional file 3 -
• A) Chi-square distribution of the squared distances about the population mean vector (D² = (y - μ)' Σ^-1 (y - μ)) with 3 degrees of freedom.
• B) Scatter plots of all pairs of variables F[max], Y[f] and m.
Additional file 2 -
amplifications (standard curve) and amplifications obtained in the presence of: tannic acid, IgG and quercitin.
Abbreviations
Ct: threshold cycle; IgG: immunoglobulin G; SOD: shape based kinetic outlier detection; KOD: kinetic outlier detection; Asym: Asymmetry.
Authors' contributions
MG and DS carried out the design of the study, participated in data analysis, developed the SOD method and drafted the manuscript. MBLR participated in data collection and analysis and critically
revised the manuscript. PT carried out the real-time PCR. DM participated in data collection. VS participated in the design of the study and critically revised the manuscript. All authors read and
approved the final manuscript.
Authors' Details
Dipartimento DiSUAN, Sezione di Biomatematica, Università degli Studi di Urbino "Carlo Bo", Campus Scientifico Sogesta, Località Crocicchia - 61029 Urbino, Italy and
Dipartimento di Scienze Biomolecolari, Sezione di Ricerca sull'Attività Motoria e della Salute, Università degli Studi di Urbino "Carlo Bo", Via I Maggetti, 26/2 - 61029 Urbino, Italy.
Cite this article as: Sisti et al., Shape based kinetic outlier detection in realtime PCR BMC Bioinformatics 2010, 11:186
1. Gingeras TR, Higuchi R, Kricka LJ, Lo YM, Wittwer CT: Fifty years of molecular (DNA/RNA) diagnostics.
Clin Chem 2005 , 51(3):661-671. PubMed Abstract | Publisher Full Text | BioMed
2. Nolan T, Hands RE, Bustin SA: Quantification of mRNA using real-time RT-PCR.
Nature Protocols 2006 , 1(3):1559-1582. PubMed Abstract | Publisher Full Text BioMed
3. VanGuilder HD, Vrana KE, Freeman WM: Twenty-five years of quantitative PCR for gene expression analysis.
Bio Techniques 2008 , 44(5):619-626. BioMed
4. Gunson RN, Bennett S, Maclean A, Carman WF: Using multiplex real time PCR in order to streamline a routine diagnostic service.
J Clin Virol 2008 , 43(4):372-375.
5. Watzinger F, Ebner K, Lion T: Detection and monitoring of virus infections by real-time PCR.
Molecular aspects of medicine 2006 , 27(2-3):254-298. PubMed Abstract | Publisher Full Text | BioMed
6. Kaltenboeck B, Wang C: Advances in real-time PCR: application to clinical laboratory diagnostics.
Advances in clinical chemistry 2005 , 40:219-259. PubMed Abstract | Publisher Full Text | BioMed
7. Akane A, Matsubara K, Nakamura H, Takahashi S, Kimura K: Identification of the heme compound copurified with deoxyribonucleic acid (DNA) from bloodstains, a major inhibitor of polymerase chain
reaction (PCR) amplification.
Journal of forensic sciences 1994 , 39(2):362-372. PubMed Abstract | BioMed
8. Wilson IG: Inhibition and facilitation of nucleic acid amplification.
Applied and environmental microbiology 1997 , 63(10):3741-3751. PubMed Abstract | PubMed Central Full Text | BioMed
9. Tichopad A, Didier A, Pfaffl MW: Inhibition of real-time RT-PCR quantification due to tissue-specific contaminants.
Mol Cell Probes 2004 , 18(1):45-50. PubMed Abstract | Publisher Full Text | BioMed
10. Rossen L, Norskov P, Holmstrom K, Rasmussen OF: Inhibition of PCR by components of food samples, microbial diagnostic assays and DNA-extraction solutions.
International journal of food microbiology 1992 , 17(1):37-45. PubMed Abstract | Publisher Full Text | BioMed
11. Guescini M, Sisti D, Rocchi MB, Stocchi L, Stocchi V: A new real-time PCR method to overcome significant quantitative inaccuracy due to slight amplification inhibition.
BMC bioinformatics 2008 , 9:326. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text | BioMed
12. Kainz P: The PCR plateau phase - towards an understanding of its limitations.
Biochimica et biophysica acta 2000 , 1494(1-2):23-27. PubMed Abstract | Publisher Full Text | BioMed
13. Livak KJ, Schmittgen TD: Analysis of relative gene expression data using real-time quantitative PCR and the 2(-Delta Delta C(T)) Method.
Methods (San Diego, Calif) 2001 , 25(4):402-408. PubMed Abstract | Publisher Full Text | BioMed
14. Liu W, Saint DA: Validation of a quantitative method for real time PCR kinetics.
Biochem Biophys Res Commun 2002 , 294(2):347-353. PubMed Abstract | Publisher Full Text | BioMed
15. Rutledge RG: Sigmoidal curve-fitting redefines quantitative real-time PCR with the prospective of developing automated high-throughput applications.
Nucleic acids research 2004 , 32(22):e178. PubMed Abstract | Publisher Full Text | PubMed Central Full Text | BioMed
16. Pfaffl MW: A new mathematical model for relative quantification in real-time RT-PCR.
Nucleic acids research 2001 , 29(9):e45. PubMed Abstract | Publisher Full Text | PubMed Central Full Text | BioMed
17. Goll R, Olsen T, Cui G, Florholmen J: Evaluation of absolute quantitation by nonlinear regression in probe-based real-time PCR.
BMC bioinformatics 2006 , 7:107. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text | BioMed
18. Bar T, Stahlberg A, Muszta A, Kubista M: Kinetic Outlier Detection (KOD) in real-time PCR.
Nucleic acids research 2003 , 31(17):e105. PubMed Abstract | Publisher Full Text | PubMed Central Full Text | BioMed
19. Kontanis EJ, Reed FA: Evaluation of real-time PCR amplification efficiencies to detect PCR inhibitors.
Journal of forensic sciences 2006 , 51(4):795-804. PubMed Abstract | Publisher Full Text | BioMed
20. Ramakers C, Ruijter JM, Deprez RH, Moorman AF: Assumption-free analysis of quantitative real-time polymerase chain reaction (PCR) data.
Neurosci Lett 2003 , 339(1):62-66. PubMed Abstract | Publisher Full Text | BioMed
21. Wilhelm J, Pingoud A, Hahn M: Validation of an algorithm for automatic quantification of nucleic acid copy numbers by real-time polymerase chain reaction.
Anal Biochem 2003 , 317(2):218-225. PubMed Abstract | Publisher Full Text | BioMed
22. Wilhelm J, Pingoud A, Hahn M: SoFAR: software for fully automatic evaluation of real-time PCR data.
Bio Techniques 2003 , 34(2):324-332. BioMed
23. Ruijter JM, Ramakers C, Hoogaars WM, Karlen Y, Bakker O, Hoff MJ, Moorman AF: Amplification efficiency: linking baseline and bias in the analysis of quantitative PCR data.
Nucleic acids research 2009 , 37(6):e45. PubMed Abstract | Publisher Full Text | PubMed Central Full Text | BioMed
24. Liu W, Saint DA: A new quantitative method of real time reverse transcription polymerase chain reaction assay based on simulation of polymerase chain reaction kinetics.
Anal Biochem 2002 , 302(1):52-59. PubMed Abstract | Publisher Full Text | BioMed
25. Spiess AN, Feig C, Ritz C: Highly accurate sigmoidal fitting of real-time PCR data by introducing a parameter for asymmetry.
BMC bioinformatics 2008 , 9:221. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text | BioMed
Unit Of Electric Power MDCAT MCQs with Answers - Youth For Pakistan
Welcome to the Unit Of Electric Power MDCAT MCQs with Answers. In this post, we have shared Unit Of Electric Power Multiple Choice Questions and Answers for PMC MDCAT 2024. Each question in MDCAT
Physics offers a chance to enhance your knowledge regarding Unit Of Electric Power MCQs in this MDCAT Online Test.
Unit Of Electric Power MDCAT MCQs Test Preparations
The SI unit of electric power is:
a) Watt
b) Volt
c) Ampere
d) Ohm
1 kilowatt is equal to:
a) 100 watts
b) 1,000 watts
c) 10 watts
d) 100,000 watts
Power can be calculated using which formula?
a) Power = Voltage × Current
b) Power = Voltage ÷ Current
c) Power = Current ÷ Voltage
d) Power = Voltage + Current
a) Power = Voltage × Current
1 megawatt is equivalent to:
a) 1,000 watts
b) 10,000 watts
c) 100,000 watts
d) 1,000,000 watts
d) 1,000,000 watts
Which unit is used to measure electrical energy consumed over time?
a) Watt-hour
b) Ampere-hour
c) Volt-hour
d) Ohm-hour
The power factor of an electrical system is:
a) The ratio of voltage to current
b) The ratio of real power to apparent power
c) The product of voltage and current
d) The square of current
b) The ratio of real power to apparent power
Which of the following is a unit of power?
a) Joule
b) Tesla
c) Watt
d) Coulomb
1 horsepower is equal to approximately:
a) 500 watts
b) 750 watts
c) 1,000 watts
d) 1,200 watts
The term “apparent power” is measured in:
a) Watts
b) Volt-amperes
c) Joules
d) Ohms
If a device consumes 100 watts of power for 1 hour, the energy consumed is:
a) 100 watt-hours
b) 1 watt-hour
c) 10 watt-hours
d) 100 joules
Which unit measures electrical power in an AC circuit?
a) Ohm
b) Volt
c) Watt
d) Ampere
The unit used to express the power of an electrical appliance is:
a) Joule
b) Watt
c) Volt
d) Ampere
Power in an electrical circuit is given by which product?
a) Current and Resistance
b) Voltage and Resistance
c) Voltage and Current
d) Voltage and Power Factor
c) Voltage and Current
1 megawatt-hour is equivalent to:
a) 1,000 watt-hours
b) 10,000 watt-hours
c) 100,000 watt-hours
d) 1,000,000 watt-hours
d) 1,000,000 watt-hours
The term “reactive power” is measured in:
a) Watts
b) Volt-amperes reactive (VAR)
c) Joules
d) Ohms
b) Volt-amperes reactive (VAR)
Electrical power is the rate at which:
a) Voltage changes
b) Energy is used
c) Current flows
d) Resistance varies
Which of the following is not a unit of power?
a) Watt
b) Kilowatt
c) Ampere
d) Horsepower
The power consumed by an appliance can be calculated if you know its:
a) Voltage and Resistance
b) Current and Resistance
c) Voltage and Current
d) Voltage and Frequency
c) Voltage and Current
What does a watt measure in electrical terms?
a) Energy per unit time
b) Voltage per unit current
c) Electrical resistance
d) Current per unit voltage
a) Energy per unit time
1,000 watts is also known as:
a) 1 kilowatt
b) 1 megawatt
c) 1 milliwatt
d) 1 gigawatt
Which term describes the real power consumed in an AC circuit?
a) Apparent Power
b) Reactive Power
c) True Power
d) Active Power
The energy consumed by a device that runs for 2 hours at 500 watts is:
a) 1,000 watt-hours
b) 500 watt-hours
c) 2,000 watt-hours
d) 1,500 watt-hours
a) 1,000 watt-hours
1 kilowatt-hour is the energy consumed when:
a) 1 kilowatt power is used for 1 hour
b) 1 watt power is used for 1 hour
c) 1 kilowatt power is used for 1 minute
d) 100 watts power is used for 10 hours
a) 1 kilowatt power is used for 1 hour
The power rating of a device tells you:
a) Its electrical efficiency
b) How much current it uses
c) The amount of energy it consumes per unit time
d) The voltage across it
c) The amount of energy it consumes per unit time
If a device uses 2,000 watts for 3 hours, the total energy consumed is:
a) 2,000 watt-hours
b) 6,000 watt-hours
c) 1,000 watt-hours
d) 600 watt-hours
b) 6,000 watt-hours
Power in an AC circuit with a power factor of 1 is:
a) True Power
b) Reactive Power
c) Apparent Power
d) Zero Power
The unit “VA” stands for:
a) Volt-amperes
b) Volt-amps
c) Voltage-amperes
d) Variable-amperes
What does the term “power factor” refer to in electrical systems?
a) Efficiency of the device
b) Ratio of reactive power to real power
c) Ratio of real power to apparent power
d) Electrical load’s resistance
c) Ratio of real power to apparent power
Electrical power is measured in terms of:
a) Energy per unit of time
b) Voltage per unit of current
c) Current per unit of voltage
d) Resistance per unit of time
a) Energy per unit of time
If an appliance runs on 220 volts and 5 amperes, its power consumption is:
a) 220 watts
b) 1,100 watts
c) 2200 watts
d) 110 watts
In a purely resistive circuit, the power factor is:
a) Zero
b) One
c) Half
d) Variable
The term “kilowatt” is commonly used to express:
a) Electrical resistance
b) Electrical current
c) Electrical power
d) Electrical voltage
c) Electrical power
The total power in an AC circuit with a power factor of 0.8 is known as:
a) True Power
b) Reactive Power
c) Apparent Power
d) Complex Power
What is the formula for calculating electrical power in a resistive circuit?
a) P = V² / R
b) P = I² × R
c) Both a and b
d) P = V × I
Power measured in “kVA” stands for:
a) Kilovolt-amperes
b) Kilowatt-amperes
c) Kilowatt-hours
d) Kilovolt-hours
a) Kilovolt-amperes
The unit “Watt” is named after:
a) James Watt
b) Michael Faraday
c) Alessandro Volta
d) Georg Simon Ohm
1 megawatt is equal to:
a) 1,000 watts
b) 10,000 watts
c) 100,000 watts
d) 1,000,000 watts
d) 1,000,000 watts
Which measurement indicates the rate at which work is done or energy is transferred?
a) Voltage
b) Power
c) Current
d) Resistance
The power factor of a purely inductive circuit is:
a) Zero
b) One
c) Negative
d) Indeterminate
1 watt is defined as:
a) 1 joule per second
b) 1 volt per ampere
c) 1 ampere per volt
d) 1 joule per minute
a) 1 joule per second
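A quick way to check the arithmetic behind several of these questions is to evaluate P = V × I and E = P × t directly; the short Python lines below are only an illustration and are not part of the MCQ set.
print(220 * 5)    # power of a 220 V, 5 A appliance: 1100 watts
print(2000 * 3)   # energy used by a 2,000-watt device running for 3 hours: 6000 watt-hours
print(500 * 2)    # energy used by a 500-watt device running for 2 hours: 1000 watt-hours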
If you are interested in enhancing your knowledge of Physics, Chemistry, Computer Science, and Biology, click on the link for each category; you will be redirected to the dedicated website for that subject.
Subtraction to 30 using simplification
Students learn that they can split a subtraction problem into tens and ones and use the subtraction problem with ones as a first step to subtract with tens.
It is important to learn this to help make subtraction easier and faster to do.
Start with a few counting exercises on the interactive whiteboard to 30. Have students count quantities and then explain their strategies for counting. It is possible to skip count in 2s, 3s, 5s or
one by one. Then decompose the numbers to 30 into tens and ones. An example is splitting 23 into 20 and 3.
Show students a set of subtraction problems that use simplification. Ask students if they notice anything about the subtraction problems. They should notice that the difference between the two problems is 10: 10 is added to the first number (the minuend) and to the difference, so the subtraction problems are related. Tell students that you can sometimes solve a subtraction problem with tens faster if you first solve the subtraction problem without the tens and use that difference as a first step. If you know 6 - 2 = 4, then you can quickly determine that 16 - 2 = 14 and 26 - 2 = 24.
Practice this with MAB blocks on the interactive whiteboard. Drag the blocks that you take away to the trash can to solve the subtraction problem, and ask students how many blocks are left. Emphasize that they don't need to recalculate the subtraction problem, because they can use the first step they already solved: 10 is added to the subtraction problem, so 10 is also added to the difference. Check whether students can also solve a subtraction problem without blocks on the interactive whiteboard by using the example of colored pencils: first 9 - 4, then 19 - 4, and then 29 - 4. Do a few more subtraction problems with the students, but without visual support. They can imagine the MAB blocks in their head to picture the ten that is added in the second subtraction problem.
Check that students understand subtraction to 30 using simplification by asking the following questions:
- 7 - 2 = 5. That means 17 - 2 = ?
- 14 - 3 = 11. 24 - 3 = 21. How do you know this is true?
- 6 - 3 = 3. Which subtraction problem(s) can you now easily solve?
Students first are given subtraction problems in which the subtraction problem with ones is given and they must solve with tens. Then they must solve the subtraction problems with tens and ones.
Finally they are given subtraction problems from 20 or 30 regrouping tens.
Repeat why it is important to learn how to subtract using simplification. It makes subtraction easier and faster. If you know the subtraction problem of the ones, you can easily solve the subtraction
problem with the tens. Check that students have understood by doing two subtraction problems as a class, one with visual support and one without.
Students who have difficulty with this can make use of the rekenrek. For example, show them 7 - 3 and then 17 - 3. Repeat this with other numbers. Emphasize that they don't need to recalculate the difference, but that the first problem is a first step to solving the subtraction problem with tens.
Gynzy is an online teaching platform for interactive whiteboards and displays in schools.
With a focus on elementary education, Gynzy’s Whiteboard, digital tools, and activities make it easy for teachers to save time building lessons, increase student engagement, and make classroom
management more efficient.
How to do Merge Sort in Python
Merge sort is one of the most efficient and reliable sorting algorithms, known for its divide-and-conquer approach. It is particularly useful for sorting large datasets due to its guaranteed O(n log
n) time complexity, making it a popular choice in many applications. This blog post walks you through the implementation of merge sort in Python, provides a detailed visual explanation, and explores
variations like bottom-up, in-place, parallel, and natural merge sort.
What is Merge Sort?
Merge sort is a divide-and-conquer sorting algorithm that recursively divides an array into smaller sub-arrays until each contains a single element. These sub-arrays are then merged back together in
sorted order.
How Merge Sort Works
1. Divide the array into two halves recursively until you have sub-arrays with one element.
2. Merge the sub-arrays by comparing elements and combining them in sorted order.
3. Repeat until all sub-arrays are merged and the entire array is sorted.
Time Complexity
• Best Case: O(n log n) — The array is split and merged efficiently.
• Worst Case: O(n log n) — Performance remains consistent regardless of input order.
Space Complexity
• Space Complexity: O(n), due to the need for temporary arrays to store subarrays during the merge process.
• Merge Sort also uses O(log n) space for the recursion stack, but the auxiliary space required for merging dominates, making the total space complexity O(n).
Python Implementation of Merge Sort
Here’s a straightforward implementation of merge sort in Python:
def merge_sort(arr):
    if len(arr) > 1:
        mid = len(arr) // 2  # Find the middle of the array
        left_half = arr[:mid]  # Split into two halves
        right_half = arr[mid:]

        # Recursively sort both halves
        merge_sort(left_half)
        merge_sort(right_half)

        i = j = k = 0

        # Merge the sorted halves
        while i < len(left_half) and j < len(right_half):
            if left_half[i] < right_half[j]:
                arr[k] = left_half[i]
                i += 1
            else:
                arr[k] = right_half[j]
                j += 1
            k += 1

        # Check for remaining elements
        while i < len(left_half):
            arr[k] = left_half[i]
            i += 1
            k += 1

        while j < len(right_half):
            arr[k] = right_half[j]
            j += 1
            k += 1

# Example usage:
data = [12, 11, 13, 5, 6, 7]
merge_sort(data)
print("Sorted array:", data)
Sorted array: [5, 6, 7, 11, 12, 13]
Diagram Explanation of Merge Sort:
To understand merge sort visually, let’s take the example of sorting the array [12, 11, 13, 5, 6, 7]:
Step 1: [12, 11, 13, 5, 6, 7] (Original array)
Step 2: Split into two halves:
Left: [12, 11, 13]
Right: [5, 6, 7]
Step 3: Recursively split each half:
Left split: [12] | [11, 13] => [12] | [11] | [13]
Right split: [5] | [6, 7] => [5] | [6] | [7]
Step 4: Merge sub-arrays:
Left merge: [11, 12, 13]
Right merge: [5, 6, 7]
Step 5: Merge the left and right halves:
[5, 6, 7, 11, 12, 13]
Variations of Merge Sort
There are several interesting variations of merge sort, each optimizing different aspects of the algorithm for specific use cases. Let’s explore some of the most popular variations:
1. Bottom-Up Merge Sort
Bottom-up merge sort avoids recursion and iteratively merges sub-arrays of increasing size. It is useful when recursion overhead is a concern.
Bottom-Up Merge Sort Code
def bottom_up_merge_sort(arr):
    width = 1
    n = len(arr)
    while width < n:
        for i in range(0, n, 2 * width):
            left = arr[i:i + width]
            right = arr[i + width:i + 2 * width]
            arr[i:i + 2 * width] = merge(left, right)
        width *= 2

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    # Append whatever remains in either half
    result.extend(left[i:])
    result.extend(right[j:])
    return result

# Example usage:
data = [12, 11, 13, 5, 6, 7]
bottom_up_merge_sort(data)
print("Sorted array (Bottom-Up):", data)
Sorted array (Bottom-Up): [5, 6, 7, 11, 12, 13]
2. Natural Merge Sort
Natural merge sort leverages the fact that data often contains already sorted sequences (runs). It identifies these runs and merges them, making it more efficient for datasets with naturally ordered runs.
Natural Merge Sort Code:
def natural_merge_sort(arr):
    if not arr:
        return arr
    runs = []
    new_run = [arr[0]]
    # Identify runs (naturally ordered subsequences)
    for i in range(1, len(arr)):
        if arr[i] >= arr[i - 1]:
            new_run.append(arr[i])
        else:
            runs.append(new_run)
            new_run = [arr[i]]
    runs.append(new_run)
    # Iteratively merge runs, pair by pair, until one run remains
    while len(runs) > 1:
        new_runs = []
        for i in range(0, len(runs), 2):
            if i + 1 < len(runs):
                new_runs.append(merge(runs[i], runs[i + 1]))
            else:
                new_runs.append(runs[i])
        runs = new_runs
    return runs[0]

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

# Example usage:
data = [12, 11, 13, 5, 6, 7, 14, 15, 1, 2, 3]
sorted_data = natural_merge_sort(data)
print("Sorted array (Natural):", sorted_data)
Sorted array (Natural): [1, 2, 3, 5, 6, 7, 11, 12, 13, 14, 15]
3. Parallel Merge Sort
With the rise of multicore processors, parallel merge sort can leverage multiple threads to improve performance. The array is split and sorted concurrently across multiple cores, and the sorted
sub-arrays are then merged in parallel. This can drastically reduce the sorting time for vast datasets.
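A minimal sketch of the idea, using only Python's standard library: the chunk splitting, ProcessPoolExecutor, the built-in sorted for the per-chunk sort, and heapq.merge for the pairwise merges are illustrative choices rather than a canonical parallel merge sort.
Parallel Merge Sort Code (illustrative sketch):
import heapq
from concurrent.futures import ProcessPoolExecutor

def parallel_merge_sort(arr, workers=4):
    # Sort arr by sorting chunks in separate processes, then merging them
    if len(arr) <= 1:
        return list(arr)
    # Split the data into roughly equal chunks, one per worker
    size = (len(arr) + workers - 1) // workers
    chunks = [arr[i:i + size] for i in range(0, len(arr), size)]
    # Sort the chunks concurrently in separate processes
    with ProcessPoolExecutor(max_workers=workers) as executor:
        sorted_chunks = list(executor.map(sorted, chunks))
    # Merge the sorted chunks pairwise until a single sorted list remains
    while len(sorted_chunks) > 1:
        merged = []
        for i in range(0, len(sorted_chunks), 2):
            if i + 1 < len(sorted_chunks):
                merged.append(list(heapq.merge(sorted_chunks[i], sorted_chunks[i + 1])))
            else:
                merged.append(sorted_chunks[i])
        sorted_chunks = merged
    return sorted_chunks[0]

# Example usage (the __main__ guard is required for process-based executors):
if __name__ == "__main__":
    data = [12, 11, 13, 5, 6, 7]
    print("Sorted array (Parallel):", parallel_merge_sort(data, workers=2))
Sorted array (Parallel): [5, 6, 7, 11, 12, 13]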
4. In-Place Merge Sort
Standard merge sort requires O(n) extra space for merging, but in-place merge sort reduces this space complexity to O(1) by sorting the array in place. The implementation is more complex and may not
perform as efficiently as the standard merge sort, but it conserves memory.
In-Place Merge Sort Code:
def merge_in_place(arr, start, mid, end):
    start2 = mid + 1
    # If the two halves are already in order, nothing to do
    if arr[mid] <= arr[start2]:
        return
    while start <= mid and start2 <= end:
        if arr[start] <= arr[start2]:
            start += 1
        else:
            # Shift arr[start2] back into position start, rotating the block by one
            value = arr[start2]
            index = start2
            while index != start:
                arr[index] = arr[index - 1]
                index -= 1
            arr[start] = value
            start += 1
            mid += 1
            start2 += 1

def in_place_merge_sort(arr, l, r):
    if l < r:
        mid = l + (r - l) // 2
        in_place_merge_sort(arr, l, mid)
        in_place_merge_sort(arr, mid + 1, r)
        merge_in_place(arr, l, mid, r)
# Example usage:
data = [12, 11, 13, 5, 6, 7]
in_place_merge_sort(data, 0, len(data) - 1)
print("Sorted array (In-Place):", data)
Sorted array (In-Place): [5, 6, 7, 11, 12, 13]
Limitations of Merge Sort
While merge sort is highly efficient, it does have certain limitations:
1. Requires Additional Memory: Merge sort needs O(n) additional space for auxiliary arrays. This can be an issue for large datasets when in-place sorting is necessary.
2. Not Ideal for Small Arrays: For small datasets, simpler algorithms like insertion sort can outperform merge sort due to lower overhead.
3. Slower in Practice: Even though merge sort has a better time complexity than algorithms like bubble sort, in practice, quick sort often outperforms it due to lower constant factors.
How Merge Sort is Used in TimSort
Merge Sort is a crucial component of TimSort, handling the merging phase after Insertion Sort is applied to small runs. TimSort leverages Merge Sort to efficiently combine these sorted runs into a
single, fully sorted array. Here’s how Merge Sort fits into TimSort:
1. Merging Sorted Runs: After Insertion Sort is used to sort small segments of the data (runs), TimSort merges these runs using the classical Merge Sort technique. Since the runs are already sorted,
Merge Sort can efficiently combine them without re-sorting individual elements, maintaining TimSort’s stability and adaptability for partially sorted data.
2. Stable Sorting: One of the main benefits of using Merge Sort in TimSort is its stability—it maintains the relative order of equal elements. This is essential for real-world datasets where the
order of similar items is often significant, such as sorting by multiple criteria.
3. Divide and Conquer: TimSort leverages the divide-and-conquer nature of Merge Sort. Once Insertion Sort has sorted the smaller runs, Merge Sort recursively combines them into larger runs. This
merging process happens in linear time, ensuring that the overall time complexity remains O(n log n).
In essence, Merge Sort allows TimSort to efficiently combine multiple sorted sections into a final, fully sorted array while retaining stability and minimizing additional operations. This combination
of Insertion Sort and Merge Sort makes TimSort particularly well-suited for handling large, real-world datasets.
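Since Python's built-in sorted and list.sort are TimSort implementations, the stability property described above is easy to observe directly; the record data below is purely illustrative.
records = [("banana", 2), ("apple", 1), ("banana", 1), ("apple", 2)]
print(sorted(records, key=lambda r: r[0]))
# [('apple', 1), ('apple', 2), ('banana', 2), ('banana', 1)]
# Records with equal keys keep their original relative order because TimSort is stable.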
Merge sort is a powerful, stable, and reliable sorting algorithm with consistent O(n log n) performance. In this post, we explored various implementations of merge sort, including bottom-up, parallel
, in-place, and natural merge sort, each suited to different scenarios. Merge sort is a great choice for sorting large datasets, especially when stability and guaranteed performance are needed.
However, be mindful of the memory overhead and consider using other variations if necessary.
Congratulations on reading to the end of this tutorial!
To implement Merge Sort in C++, go to the article How To Do Merge Sort in C++.
To implement Merge Sort in Rust, go to the article How To Do Merge Sort in Rust.
To implement Merge Sort in Java, go to the article How To Do Merge Sort in Java.
For further reading on sorting algorithms in Python, see the related articles on this site.
Go to the online courses page on Python to learn more about Python for data science and machine learning.
Have fun and happy researching!
Suf is a senior advisor in data science with deep expertise in Natural Language Processing, Complex Networks, and Anomaly Detection. Formerly a postdoctoral research fellow, he applied advanced
physics techniques to tackle real-world, data-heavy industry challenges. Before that, he was a particle physicist at the ATLAS Experiment of the Large Hadron Collider. Now, he’s focused on bringing
more fun and curiosity to the world of science and research online.
How do you solve simultaneous equations graphically?
How do you solve simultaneous equations graphically?
Knowledge of plotting linear and quadratic graphs is needed to solve equations graphically. To find solutions from graphs, look for the point where the two graphs cross one another. This is the
solution point. For example, the solution for the graphs y = x + 1 and x + y = 3 is the coordinate point (1, 2).
How do you solve equations graphically?
To solve an equation graphically, draw the graph for each side, member, of the equation and see where the curves cross, are equal. The x values of these points, are the solutions to the equation.
How many solutions are there if graph of two simultaneous equations intersect each other in one point?
The coordinates of the intersection point of the lines are the solution to the simultaneous linear equations describing the lines. So we would normally expect a pair of simultaneous equations to have just one solution. Hence, the solution of the simultaneous equations is x = 2, y = 1.
How do you solve simultaneous equations with 3 variables?
Here, in step format, is how to solve a system with three equations and three variables:
1. Pick any two pairs of equations from the system.
2. Eliminate the same variable from each pair using the Addition/Subtraction method.
3. Solve the system of the two new equations using the Addition/Subtraction method.
How do you solve equations algebraically?
Use elimination to solve for the common solution in the two equations: x + 3y = 4 and 2x + 5y = 5. The solution is x = –5, y = 3. Multiply each term in the first equation by –2 (you get –2x – 6y = –8) and then add the terms in the two equations together. Now solve –y = –3 for y, and you get y = 3; substituting y = 3 back into x + 3y = 4 gives x = –5.
How do you solve 3 equations quickly?
Pick any two pairs of equations from the system. Eliminate the same variable from each pair using the Addition/Subtraction method. Solve the system of the two new equations using the Addition/
Subtraction method. Substitute the solution back into one of the original equations and solve for the third variable.
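As an illustration, a three-variable system can also be checked with a computer algebra package. The sketch below assumes the SymPy library is available and uses a hypothetical example system rather than one from this page.
from sympy import symbols, solve

x, y, z = symbols("x y z")
# Solve x + y + z = 6, 2x - y + z = 3, x + 2y - z = 2
print(solve([x + y + z - 6, 2*x - y + z - 3, x + 2*y - z - 2], [x, y, z]))
# {x: 1, y: 2, z: 3}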
What is the formula for simultaneous equation?
This is a process which involves removing or eliminating one of the unknowns to leave a single equation which involves the other unknown. The method is best illustrated by example. Example: Solve the simultaneous equations 3x + 2y = 36 (1) and 5x + 4y = 64 (2). Multiplying (1) by 2 gives 6x + 4y = 72; subtracting (2) from this gives x = 8, and substituting x = 8 back into (1) gives 2y = 12, so y = 6. Hence the full solution is x = 8, y = 6.
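The elimination arithmetic in this example can be verified with a few lines of Python; this is a rough sketch and the variable names are only for illustration.
# Check 3x + 2y = 36 and 5x + 4y = 64 by elimination
a1, b1, c1 = 3, 2, 36
a2, b2, c2 = 5, 4, 64
m = b2 / b1                        # multiply equation (1) so the y-coefficients match
x = (m * c1 - c2) / (m * a1 - a2)  # eliminate y and solve for x
y = (c1 - a1 * x) / b1             # substitute back to find y
print(x, y)                        # 8.0 6.0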
How many solutions do 2 parallel lines have?
When the lines are parallel, there are no solutions, and sometimes the two equations will graph as the same line, in which case we have an infinite number of solutions. Some special terms are
sometimes used to describe these kinds of systems.
How do you know if an equation has no solution?
The coefficients are the numbers alongside the variables. The constants are the numbers alone with no variables. If the variable terms have the same coefficients on both sides but the constants differ, the two sides can never be equal, so no solutions will occur. Use the distributive property on the right side first, if needed, before comparing the two sides.
How to solve simultaneous equations with a graph?
It is possible to solve linear simultaneous equations with graphs by finding where they intersect. Example: By plotting their graphs for values of x between −1 and 4, on the same axes, find the solution to the two simultaneous equations below: y = 2x − 5 and y = −x + 4.
Which is the best definition of a simultaneous equation?
Simultaneous equations are multiple equations that share the same variables and which are all true at the same time. When an equation has 2 variables it is much harder to solve; however, if you have 2 equations, both with the same 2 variables, you can solve them simultaneously.
What do you need to know about solving equations graphically?
Knowledge of plotting linear and quadratic graphs is needed to solve equations graphically. To find solutions from graphs, look for the point where the two graphs cross one another. This is the
solution point. For example, the solution for the graphs (y = x + 1) and (x + y = 3) is the coordinate point (1, 2).
How to write simultaneous equations in a spreadsheet?
Step 1: Write one equation above the other in the form ax + by = c, rearranging if needed. Step 2: Get the coefficients of one variable to match. Step 3: Add or subtract the equations to eliminate the terms with equal coefficients (for example, if both equations contain +20x, we must subtract the equations). Step 4: Solve the resulting equation.
157 research outputs found
When axion stars fly through an astrophysical magnetic background, the axion-to-photon conversion may generate a large electromagnetic radiation power. After including the interference effects of the spatially-extended axion-star source and the macroscopic medium effects, we estimate the radiation power when an axion star meets a neutron star. For a dense axion star with $10^{-13}\,M_\odot$, the radiated power is at the order of $10^{11}\,\mbox{W}\times(100\,\mu\mbox{eV}/m_a)^4\,(B/10^{10}\,\mbox{Gauss})^2$ with $m_a$ as the axion particle mass and $B$ the strength of the neutron star magnetic field. For axion stars that occupy a large fraction of the dark matter energy density, this encounter event with a transient $\mathcal{O}(0.1\,\mbox{s})$ radio signal may happen in our galaxy with an averaged source distance of one kiloparsec. The predicted spectral flux density is at the order of $\mu$Jy for a neutron star with $B\sim 10^{13}$ Gauss. The existing Arecibo, GBT, JVLA and FAST and the ongoing SKA radio telescopes have excellent discovery potential of dense axion stars. Comment: 16 pages, 2 figures
We study asymptotic safety of models of the higher derivative quantum gravity with and without matter. The beta functions are derived by utilizing the functional renormalization group, and
non-trivial fixed points are found. It turns out that all couplings in the gravity sector, namely the cosmological constant, the Newton constant, and the $R^2$ and $R_{\mu\nu}^2$ coupling constants, are relevant in the case of higher derivative pure gravity. For the Higgs-Yukawa model non-minimally coupled to higher derivative gravity, we find a stable fixed point at which the scalar-quartic and the Yukawa coupling constants become relevant. The relevant Yukawa coupling is crucial to realize the finite value of the Yukawa coupling constants in the standard model. Comment: Version published in JHEP; 75 pages, 10 figures, typos corrected, references added
We study higher dimensional models with magnetic fluxes, which can be derived from superstring theory. We study mass spectrum and wavefunctions of massless and massive modes for spinor, scalar and
vector fields. We compute the 3-point couplings and higher order couplings among massless modes and massive modes in 4D low-energy effective field theory. These couplings have non-trivial behaviors,
because wavefunctions of massless and massive modes are non-trivial. Comment: 21 pages
The requirement for an ultraviolet completable theory to be well-behaved upon compactification has been suggested as a guiding principle for distinguishing the landscape from the swampland. Motivated
by the weak gravity conjecture and the multiple point principle, we investigate the vacuum structure of the standard model compactified on $S^1$ and $T^2$. The measured value of the Higgs mass
implies, in addition to the electroweak vacuum, the existence of a new vacuum where the Higgs field value is around the Planck scale. We explore two- and three-dimensional critical points of the
moduli potential arising from compactifications of the electroweak vacuum as well as this high scale vacuum, in the presence of Majorana/Dirac neutrinos and/or axions. We point out potential sources
of instability for these lower dimensional critical points in the standard model landscape. We also point out that a high scale $AdS_4$ vacuum of the Standard Model, if it exists, would be at odds with
the conjecture that all non-supersymmetric $AdS$ vacua are unstable. We argue that, if we require a degeneracy between three- and four-dimensional vacua as suggested by the multiple point principle,
the neutrinos are predicted to be Dirac, with the mass of the lightest neutrino $\mathcal{O}(1-10)$ meV, which may be tested by future CMB, large scale structure and $21$cm line observations. Comment: 56 pages, 22 figures, published version
We propose a novel leptogenesis scenario at the reheating era. Our setup is minimal in the sense that, in addition to the standard model Lagrangian, we only consider an inflaton and higher
dimensional operators. The lepton number asymmetry is produced not by the decay of a heavy particle, but by the scattering between the standard model particles. After the decay of an inflaton, the
model is described within the standard model with higher dimensional operators. Sakharov's three conditions are satisfied in the following way. The violation of the lepton number is realized by
the dimension-5 operator. The complex phase comes from the dimension-6 four lepton operator. The universe is out of equilibrium before the reheating is completed. It is found that the successful
baryogenesis is realized for the wide range of parameters, the inflaton mass and reheating temperature, depending on the cutoff scale. Since we only rely on the effective Lagrangian, our scenario can
be applicable to all mechanisms to generate neutrino Majorana masses. Comment: 5 pages, 3 figures; published version (v2)
It is known that soft photon and graviton theorems can be regarded as the Ward-Takahashi identities of asymptotic symmetries. In this paper, we consider soft theorem for pions, i.e., Nambu-Goldstone
bosons associated with a spontaneously broken axial symmetry. The soft pion theorem is written as the Ward-Takahashi identities of the $S$-matrix under asymptotic transformations. We investigate the
asymptotic dynamics, and find that the conservation of charges generating the asymptotic transformations can be interpreted as a pion memory effect. Comment: 25 pages, 2 figures, v2: references and discussions added
Completing Incomplete Tables - PSAT Math
Example Questions
Example Question #1 : Use Data To Construct And Interpret A Two Way Table: Ccss.Math.Content.8.Sp.A.4
A teacher at a high school conducted a survey of seniors and found that
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who have a laptop or do not have a laptop and the rows will contain the students who have a car or do not have a car. The first bit of information
that we were given from the question was that
Our question asked how many students have a laptop, but do not have a car. We can take the total number of students that own a laptop,
This means that
Example Question #2 : Use Data To Construct And Interpret A Two Way Table: Ccss.Math.Content.8.Sp.A.4
A teacher at a high school conducted a survey of seniors and found that
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who have a laptop or do not have a laptop and the rows will contain the students who have a car or do not have a car. The first bit of information
that we were given from the question was that
Our question asked how many students do not have a laptop. We add up the numbers in the "no laptop" column to get the total:
This means that
Example Question #2 : Use Data To Construct And Interpret A Two Way Table: Ccss.Math.Content.8.Sp.A.4
A teacher at a high school conducted a survey of seniors and found that
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who have a laptop or do not have a laptop and the rows will contain the students who have a car or do not have a car. The first bit of information
that we were given from the question was that
Our question asked how many students have a car. We add up the numbers in the "car" row to get the total:
This means that
Example Question #3 : Use Data To Construct And Interpret A Two Way Table: Ccss.Math.Content.8.Sp.A.4
A teacher at a high school conducted a survey of seniors and found that
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who have a laptop or do not have a laptop and the rows will contain the students who have a car or do not have a car. The first bit of information
that we were given from the question was that
Our question asked how many students do not have a car. We add up the numbers in the "no car" row to get the total, but first we need to fill in a gap in our table, students who have a laptop, but
don't have a car:
We can take the total number of students that own a laptop,
This means that
Now, we add up the numbers in the "no car" row to get the total:
This means that
Example Question #3 : Use Data To Construct And Interpret A Two Way Table: Ccss.Math.Content.8.Sp.A.4
A teacher at a high school conducted a survey of freshmen and found that
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who have a curfew or do not have a curfew and the rows will contain the students who are on honor roll or are not on honor roll. The first bit of
information that we were given from the question was that
Our question asked how many students have a curfew, but were not on honor roll. We can take the total number of students that have a curfew,
This means that
Example Question #3 : Use Data To Construct And Interpret A Two Way Table: Ccss.Math.Content.8.Sp.A.4
A teacher at a high school conducted a survey of freshmen and found that
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who have a curfew or do not have a curfew and the rows will contain the students who are on honor roll or are not on honor roll. The first bit of
information that we were given from the question was that
Our question asked how many students did not have a curfew. We add up the numbers in the "no curfew" column to get the total:
This means that
Example Question #4 : Completing Incomplete Tables
A teacher at a high school conducted a survey of freshmen and found that
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who have a curfew or do not have a curfew and the rows will contain the students who are on honor roll or are not on honor roll. The first bit of
information that we were given from the question was that
Our question asked how many students were on honor roll. We add up the numbers in the "honor roll" row to get the total:
This means that
Example Question #6 : Use Data To Construct And Interpret A Two Way Table: Ccss.Math.Content.8.Sp.A.4
A middle school teacher conducted a survey of the
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who are athletes or are not athletes and the rows will contain the students who drink soda or do not drink soda. The first bit of information that
we were given from the question was that
Our question asked how many students are athletes, but don't drink soda. We can take the total number of students who are athletes,
This means that
Example Question #1 : Completing Incomplete Tables
A middle school teacher conducted a survey of the
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who are athletes or are not athletes and the rows will contain the students who drink soda or do not drink soda. The first bit of information that
we were given from the question was that
Our question asked how many students were not athletes. We add up the numbers in the "not an athlete" column to get the total:
This means that
Example Question #2182 : Psat Mathematics
A middle school teacher conducted a survey of the
Correct answer:
To help answer this question, we can construct a two-way table and fill in our known quantities from the question.
The columns of the table will represent the students who are athletes or are not athletes and the rows will contain the students who drink soda or do not drink soda. The first bit of information that
we were given from the question was that
Our question asked how many students drink soda. We add up the numbers in the "drinks soda" row to get the total:
This means that
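In general, once the totals are known, each missing cell of a two-way table follows by subtraction. The short sketch below uses hypothetical counts (not the values from the questions above) to show the bookkeeping.
# Hypothetical survey: 100 students, 60 own a laptop, 45 own a car, 30 own both
total, laptop, car, both = 100, 60, 45, 30
laptop_no_car = laptop - both                             # laptop owners without a car
car_no_laptop = car - both                                # car owners without a laptop
neither = total - (both + laptop_no_car + car_no_laptop)  # students with neither
print(laptop_no_car, car_no_laptop, neither)              # 30 15 25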
1 Introduction to Programming in Maple
Maple provides an interactive problem-solving environment, complete with procedures for performing symbolic, numeric, and graphical computations. At the core of the Maple computer algebra system is
a powerful programming language, upon which the Maple libraries of mathematical commands are built.
1.1 In This Chapter
• Components of the Maple software
• Procedures and other essential elements of the Maple language
1.2 The Maple Software
The Maple software consists of two distinct parts.
The User Interface
You can use the Maple user interface to enter, manipulate, and analyze mathematical expressions and commands. The user interface communicates with the Maple computation engine to solve
mathematical problems and display their solutions.
For more information about the Maple user interface, refer to the Maple User Manual.
The Computation Engine
The Maple computation engine is the command processor, which consists of two parts: the kernel and math library.
The kernel is the core of the Maple computation engine. It contains the essential facilities required to run and interpret Maple programs, and manage data structures. In this guide, the kernel
commands are referred to as built-in commands.
The Maple kernel also consists of kernel extensions, which are collections of external compiled libraries that are included in Maple to provide low-level programming functionality. These libraries
include Basic Linear Algebra Subprograms (BLAS), GNU Multiple Precision (GMP), the NAG® C Library, and the C Linear Algebra PACKage (CLAPACK).
The math library contains most of the Maple commands. It includes functionality for numerous mathematical domains, including calculus, linear algebra, number theory, and combinatorics. Also, it
contains commands for numerous other tasks, including importing data into Maple, XML processing, graphics, and translating Maple code to other programming languages.
All library commands are implemented in the high-level Maple programming language, so they can be viewed and modified by users. By learning the Maple programming language, you can create custom
programs and packages, and extend the Maple library.
1.3 Maple Statements
There are many types of valid statements. Examples include statements that request help on a particular topic, display a text string, perform an arithmetic operation, use a Maple library command,
or define a procedure.
Statements in 1-D notation require a trailing semicolon (;) or colon (:). If you enter a statement with a trailing semicolon, for most statements, the result is displayed. If you enter a statement
with a trailing colon, the result is computed but not displayed.
For more information about statements in Maple, see Maple Statements.
Getting Help
To view a help page for a particular topic, enter a question mark (?) followed by the corresponding topic name. For example, ?procedure displays a help page that describes how to write a Maple procedure.
For more information about getting help in Maple, refer to the help and HelpGuide help pages.
This type of Maple statement does not have a trailing colon or semicolon.
Displaying a Text String
The following statement returns a string. The text that forms the string is enclosed in double quotes, and the result (the string itself) is displayed because the statement has a trailing semicolon.
Normally, you would create a string as part of another statement, such as an assignment or an argument for a procedure.
For more information about strings in Maple, see Maple Language Elements.
Performing an Arithmetic Operation
The arithmetic operators in Maple are + (addition), - (subtraction), * (multiplication), / (division), and ^ (exponentiation). A statement can be an arithmetic operation that contains any
combination of these operators. The standard rules of precedence apply.
Maple computes this result as an exact rational number.
Assigning to a Name
By naming a calculated result or complicated expression, you can reference it. To assign to a name, use the assignment operator, :=.
For more information about names and assignment, see Maple Language Elements.
Using Maple Library Commands
After a value is assigned to a name, for example, the value assigned previously to a, you can use the name as if it were the assigned object. For example, you can use the Maple library command
evalf to compute a floating-point (decimal) approximation to 103993/33102 divided by 2 by entering the following statement.
> evalf(a/2);
You can use the Maple library of commands, introduced in The Computation Engine, for many purposes. For example, you can find the derivative of an expression by using the diff command.
> diff(x^2 + x + 1/x, x);
Note the difference between the names used in these two examples. In the first example, a is a variable with an assigned value. In the second example, x is a symbol with no assigned value. Maple
can represent and compute with symbolic expressions.
For more information about the Maple library commands, refer to the Maple User Manual or the help system.
1.4 Procedures
This section introduces the concept of procedures in Maple. For more information about procedures, see Procedures.
Defining a Simple Procedure
A Maple procedure (a type of program) is a group of statements that are processed together. The easiest way to create a Maple procedure is to enclose a sequence of commands, which can be used to
perform a computation interactively, between the proc(...) and end proc statements.
Entering a Procedure Definition
The following procedure generates the string "Hello World". Enter this procedure in a Maple session by entering its definition on one line.
> hello := proc() return "Hello World"; end proc;
You can also enter a procedure or any Maple statement on multiple lines. To move the cursor to the next line as you are entering a multiline statement, hold the Shift key and press Enter at the
end of each line.
Note: This is necessary in the interactive worksheet environment only. If you enter code in a code edit region, you can simply type the text and press Enter to move the cursor to next line. For
more information on code edit regions, refer to the CodeEditRegion help page. For more information about using Shift+Enter, see Unexpected End of Statement.
You can indent lines in a procedure by using the spacebar. After you enter the last line, end proc;, press Enter.
> hello := proc()
return "Hello World";
end proc;
To run this procedure, enter its name followed by a set of parentheses and a semicolon:
> hello();
Procedures can also accept arguments. Consider the following example.
> half := proc(x)
evalf(x/2);
end proc;
This procedure requires one input, x. The procedure computes the approximation of the value of x divided by 2. When a return statement is not specified, a Maple procedure returns the result of the
last statement that was run. Since evalf(x/2) is the last calculation performed in the procedure half (in fact, it is the only calculation), the result of that calculation is returned.
The procedure is named half by using the := notation in the same way that you would assign any other object to a name. After you have named a procedure, you can use it as a command in the current
Maple session. The syntax to run your procedure is the same syntax used to run a Maple library command: enter the procedure name followed by the input to the procedure enclosed in parentheses.
The basic syntax for a procedure is given below.
The letter P indicates the parameters. The body of the procedure is between the proc and end proc keywords.
Consider the following two statements, which calculate the angle in a right triangle given the lengths of two sides.
> theta := arcsin(opposite/hypotenuse);
> evalf(180/Pi*theta);
The following example shows a procedure that corresponds to these statements. The procedure definition contains two input parameters for the length of two sides of a right triangle.
> GetAngle := proc( opposite, hypotenuse )
local theta;
theta := arcsin(opposite/hypotenuse);
evalf(180/Pi*theta);
end proc;
When you run the procedure definition, the output shown is the Maple interpretation of this procedure definition. Examine it carefully and note the following characteristics.
• The name of this procedure (program) is GetAngle. Note that Maple is case-sensitive, so GetAngle is distinct from getangle.
• The procedure definition starts with proc( opposite, hypotenuse ). The two names in parentheses indicate the parameters, or inputs, of the procedure.
• Semicolons or colons separate the individual commands of the procedure.
• The local theta; statement declares theta as a local variable. A local variable has meaning in the procedure definition only. Therefore, if you were to declare another variable called theta
outside of the procedure, that variable would be different from the local variable theta declared in the procedure and you could use theta as a variable name outside of the procedure GetAngle
without conflict.
For more information about local variables, see Variables in Procedures.
• Pi is a predefined variable in Maple. Two predefined functions, evalf and arcsin, are used in the calculation.
• The end proc keywords and a colon or semicolon indicate the end of the procedure.
• As you enter the procedure, the commands of the procedure do not display output. The procedure definition is displayed as output only after you complete it with end proc and a semicolon.
• There is no explicit return statement, so the result of calling the procedure is the result of the last calculation.
• The procedure definition that displays in the output is equivalent to, but not identical to, the procedure definition you enter. When Maple parses the statement, the commands of the procedure
may be simplified.
The procedure definition syntax is flexible. You can do the following:
• Enter each statement on one or more lines
• Enter multiple statements on one line, provided they are separated by colons or semicolons
• Place extra semicolons between statements
• Omit the semicolon (or colon) from the statement preceding end proc
To hide the output resulting from a complicated procedure definition, use a colon instead of a semicolon at the end of the definition.
Adding Comments to a Procedure
Consider the following example.
(* this procedure computes an interior angle of a right triangle *)
You can include single line comments anywhere in the procedure. They begin with a pound character (#). You can also enter multiline comments between (* and *) symbols as shown in the example
Note: Multiline comments cannot be entered in 2-D math notation. As an alternative, in a Maple document, you can enter comments as text by adding a paragraph above or below the Maple statement.
Calling a Procedure
Running a procedure is referred to as an invocation or a procedure call. When you invoke a procedure, Maple runs the statements that form the procedure body one at a time. The result of the last
computed statement within the procedure is returned as the value of the procedure call.
For example, to run the procedure GetAngle--that is, to cause the statements that form the procedure to be run in sequence--enter its name followed by parentheses enclosing the inputs, in this
case, two numbers delimited (separated) by commas (,). End the statement with a semicolon.
Only the result of the last calculation performed within the procedure GetAngle is returned--the result of evalf(180/Pi*theta). The assignment theta:=arcsin(opposite/hypotenuse); is performed, but
the statement result is not displayed.
Maple Library Commands, Built-In Commands, and User-Defined Procedures
Maple comes with a large collection of commands and packages. Before writing custom procedures, refer to the Maple help system to find out which commands are available. You can easily include
complex tasks in your user-defined procedures by using existing Maple commands instead of writing new code.
Maple commands are implemented in one of two formats: those written and compiled in an external language such as C and those written in the Maple programming language.
The commands that are compiled as part of the Maple kernel are referred to as built-in commands. These are widely used in computations, and are fundamental for implementing other Maple commands.
For more information about built-in kernel commands, see The Computation Engine and The builtin Option.
The commands in the Maple library are written in the Maple programming language. These commands exist as individual commands or as packages of commands. They are accessed and interpreted by the
Maple system as required. The code for the library commands and the definitions of user-defined procedures can be viewed and modified. However, before exploring library commands, it is important
that you learn about evaluation rules to understand the code.
Full Evaluation and Last Name Evaluation
For most expressions assigned to a name, such as e defined with the following statement, you can obtain its value by entering its name.
This is called full evaluation--each name in the expression is fully evaluated to the last assigned expression in any chain of assignments. The following statements further illustrate how full
evaluation works.
This group of statements creates a chain of assignments, and c fully evaluates to 1.
If you try this approach with a procedure, Maple displays only the name of the procedure instead of its value (the procedure definition). For example, in the previous section, GetAngle is defined
as a procedure. If you try to view the body of procedure GetAngle by referring to it by name, the procedure definition is not displayed.
This model of evaluation is called last name evaluation and it hides the procedure details. There are several reasons for this approach relating to advanced evaluation topics. The most important
concept to understand is that you will only see the name of a procedure when you reference it by itself or when it is returned unevaluated; you will not see the full procedure definition. To
obtain the value of the name GetAngle, use the eval command, which forces full evaluation.
Last name evaluation applies to procedures, tables, and modules in Maple. For more information, refer to the last_name_eval help page.
Viewing Procedure Definitions and Maple Library Code
You can learn about programming in Maple by studying the procedure definitions of Maple library commands. To print the body of Maple library commands, set the Maple interface variable verboseproc
to 2, and then use the print command.
For example, to view the procedure definition for the Maple least common multiple command, lcm, enter the following statements.
For more information about interface variables, refer to the interface help page.
> interface(verboseproc = 2):
> print(lcm);
Because the built-in kernel commands are compiled in machine code, and not written in the Maple language, you cannot view their definitions. If you print the definition of a built-in procedure,
you will see that the procedure has only an option builtin statement and no visible body.
1.5 Interrupting Computations and Clearing the Internal Memory
Interrupting a Maple Computation
To stop a computation, for example, a lengthy calculation or infinite loop, use one of the following three methods.
Note: Maple may not always respond immediately to an interrupt request if it is performing a complex computation. You may need to wait a few seconds before the computation stops.
• Click the stop icon in the toolbar (in worksheet versions).
• Click the interrupt icon in the toolbar (in worksheet versions). See Figure 1.1.
Figure 1.1: Maple Toolbar
Note: For more information on the toolbar icons, refer to the worksheet/reference/WorksheetToolbar help page.
• Hold the Ctrl key and press the C key (in UNIX and Windows command-line versions).
• Hold the Command key and press the period key (.) (in Macintosh command-line and worksheet versions).
To perform a hard interrupt, which stops the computation and exits the Maple session, in the Windows command-line interface, hold the Ctrl key and press the Break key.
Clearing the Maple Internal Memory
Clear the internal memory during a Maple session by entering the restart command or clicking the restart icon in the worksheet toolbar. When you enter this command, the Maple session returns to
its startup state, that is, all identifiers (including variables and procedures) are reset to their initial values.
For more information on clearing the Maple internal memory and the restart command, refer to the restart help page. For more information on the toolbar icons, refer to the worksheet/reference/
WorksheetToolbar help page.
Maple tracks the use of permanent and temporary objects. Its internal garbage collection facility places memory that is no longer in use on free lists so it can be used again efficiently as
needed. For more information on garbage collection and the gc command, see Garbage Collection.
1.6 Avoiding Common Problems
This section provides a list of common mistakes, examples, and hints that will help you understand and avoid common errors. Use this section to study the errors that you may encounter when entering
the examples from this chapter in a Maple session.
Unexpected End of Statement
Most valid statements in Maple must end in either a colon or a semicolon. An error message is displayed if you press Enter in an input region that is incomplete.
Tip: You can use the parse command to find errors in statements, and the Maple debugger to find errors in programs. For more information on the debugger, see The Maple Debugger: A Tutorial Example
or refer to the parse and debugger help pages.
If you press Enter to move the cursor to a new line when you are entering a procedure definition on multiple lines, the following error is displayed.
Warning, premature end of input, use <Shift> + <Enter> to avoid this message.
To prevent this error message from displaying as you enter a procedure definition, hold the Shift key and press Enter at the end of each line, instead of pressing only Enter.
> p := proc()
"Hello World";
end proc;
In 1-D math notation, if you do not enter a trailing semicolon or colon, Maple inserts a semicolon and displays the following warning message.
Warning, inserted missing semicolon at end of statement
Maple also inserts a semicolon after end proc in procedure definitions.
> p := proc()
"Hello World";
end proc
Warning, inserted missing semicolon at end of statement
Missing Operator
The most common error of this type is omitting the multiplication operator.
Error, missing operator or `;`
You can avoid this error by entering an asterisk (*) to indicate multiplication.
Implicit multiplication, which can be used in 2-D math input, is not valid syntax in 1-D math input.
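For example (illustrative 1-D math input), omitting the operator triggers the error, while the explicit form is accepted:
> 2 a + 3;
Error, missing operator or `;`
> 2*a + 3;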
Invalid, Wrong Number or Type of Arguments
An error is displayed if the argument(s) to a Maple library command are incorrect or missing.
Error, invalid input: evalf expects 1 or 2 arguments, but received 0
Warning, solving for expressions other than names or functions is not recommended.
Error, (in solve) a constant is invalid as a variable, 5
Error, (in cos) expecting 1 argument, got 2
If such an error occurs, check the appropriate help page for the correct syntax. Enter ?topic_name at the Maple prompt.
The same type of error message is displayed if you call a user-defined procedure, such as GetAngle, with the wrong number of arguments.
Unbalanced Parentheses
In complicated expressions or nested commands, it is easy to omit a closing parenthesis.
In a valid statement, each (, {, and [ requires a matching ), }, and ], respectively.
Assignment Versus Equality
When you enter statements in a Maple session, it is important to understand the difference between equality (using =) and assignment (using :=).
The equal sign, =, is used in equality tests or to create equations. Creating an equation is a valid Maple statement.
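For example, the following pair of statements creates an equation and then solves it for y (an illustrative reconstruction of the example being discussed):
> x = y^2 + 3;
> solve(%, y);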
In the example above, % is a special name that stores the value of the last statement. The solve command is used to isolate y in the equation defined in the first statement. The first statement is
not an assignment; x remains a symbol with no assigned value.
You can use the assignment operator, :=, to assign x the value y^2+3. The assignment operator assigns the value of the right-hand side to the left-hand side. After an assignment is made, the
left-hand side can be used in place of the value of the right-hand side. The left-hand side cannot be a number; it must be a name, indexed name, function call, or sequence of these values.
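For comparison, an assignment of the same expression (illustrative):
> x := y^2 + 3;
> x;
After the assignment, evaluating x returns y^2 + 3.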
For more information about equations and Boolean testing, see Boolean and Relational Expressions or refer to the evalb help page. For more information about names and assignment, see Names and
1.7 Exercises
1. Assign the integers 12321, 23432, and 34543 to the names a, b, and c. Use these names to find the sum and difference of each pair of numbers.
2. Write two procedures. The first requires two inputs and finds their sum. The second requires two inputs and finds their product. Use these procedures to add and multiply pairs of numbers. How
could you use these procedures to add and multiply three numbers?
3. Display your procedure definitions. Are they identical to the code you entered to write them?
Contents Previous Next Index
Was this information helpful?
Please add your Comment (Optional)
E-mail Address (Optional)
What is This question helps us to combat spam
|
{"url":"https://cn.maplesoft.com/support/helpJP/Maple/view.aspx?path=ProgrammingGuide/Chapter01&L=C","timestamp":"2024-11-13T12:23:08Z","content_type":"application/xhtml+xml","content_length":"431847","record_id":"<urn:uuid:5653e607-873e-45e6-a5e4-ea1f03382431>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00876.warc.gz"}
|
Forestry Catchment Planner
Melton Ratio
During landslide-triggering rainfall events, debris flows also commonly occur in steep, erosion-susceptible catchments. For this reason, we also need to show susceptibility to debris flows for each
hillslope unit. The most commonly used measure of debris flow susceptibility is the Melton ratio (Melton-R), which measures a catchment's average steepness (Melton 1965) and gives an indication of
the catchment’s ability to generate debris flows. Using the Melton-R, we can classify hillslope units into low (Melton-R < 0.3), medium (Melton-R = 0.3 - 0.6) or high (Melton-R > 0.6) susceptibility
to debris flows. This provides an extra layer of information to assist us in managing landslide risks during the window of vulnerability.
Debris flow occurrence depends on three factors (Welsh and Davies 2010):
1. steep channel slopes combined with
2. availability of large volumes of sediment for mobilisation, either on slopes or in a stream channel
3. rainfall and/or streamflow of sufficient intensity to mobilise the sediment.
These factors also contribute to the susceptibility as assessed by the Rainfall Induced Landslide (RIL) susceptibility model. The Melton-R focuses on the first factor (slope steepness), since steep
channel slopes are far more likely to result in debris flows occurring.
The Melton-R is the ratio between catchment relief (difference between maximum and minimum elevations in the catchment) and the square root of watershed area (Melton 1965). The image below shows the
most common method for estimating the Melton-R, where it is calculated for an entire catchment above the apex of the fan onto which a debris flow would discharge.
Calculation of the Melton-R for a steep catchment (red area). Example illustration based on excerpt from Melton, M. A. (1965). The geomorphic and paleoclimatic significance of alluvial deposits in
southern Arizona. The Journal of Geology, 1-38.
Calculation of the Melton-R
Melton ratio = Relative Relief Ratio
Melton ratio (R) = Hb × Ab^(-0.5) = Hb / √Ab
Hb: basin relief (difference between maximum and minimum elevations in the basin)
Ab: total area of the basin
Example: Alpine Baldy, South Fork Skykomish
Top Elev.: 1,584 m
Bottom Elev.: 464 m
Area: 2,351,050 m²
R = (1584 - 464) * (2,351,050)^-0.5
= 0.73
Source: Melton, M. A. (1965).
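A minimal sketch of the calculation in code (the function name, inputs and thresholds simply restate the definition and susceptibility classes given above; this is not the FCP's implementation):

import math

def melton_ratio(max_elev_m, min_elev_m, area_m2):
    """Melton ratio R = basin relief / square root of basin area."""
    relief = max_elev_m - min_elev_m      # Hb, in metres
    return relief / math.sqrt(area_m2)    # Ab, in square metres

r = melton_ratio(1584, 464, 2351050)      # the Alpine Baldy example above
print(round(r, 2))                        # -> 0.73

if r > 0.6:
    susceptibility = "high"
elif r >= 0.3:
    susceptibility = "medium"
else:
    susceptibility = "low"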
To account for both landslides and debris flows, the FCP calculates the Melton-R for each HSU, then overlays these over the HSU’s Rainfall Induced Landslide (RIL) susceptibility, to provide an
overall indication of where landslide sediments are more likely to mobilise as debris flows.
|
{"url":"https://www.docs.forestrycatchmentplanner.nz/melton-ratio","timestamp":"2024-11-09T18:31:42Z","content_type":"text/html","content_length":"66797","record_id":"<urn:uuid:97991995-08e5-4726-a37e-25faea11a5b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00575.warc.gz"}
|
My problem is the same,
I create a graph using the boost graph library. Each vertices of the graph is equivalent to a point in 3D space. If two points are within a given threshold distance, I create an edge between them on
the graph.
My problem is that BOOST never seems to separate the points correctly. I have a Matlab script which always successfully partitions my point data into groups of points separated by distance. BOOST
always incorrectly partitions.
I have simplified my code as much as possible to try and find the root of the issue (I no longer explicitly create vertices) but the results are always the same:
adjacency_list<vecS, vecS, undirectedS, Point> Graph;
for (size_t i = 0; i < PointDat.size(); i++)
    for (size_t j = i; j < PointDat.size(); j++)
    {
        double thisdist = PointDat[i].Distance(PointDat[j]);
        if (thisdist < distance && thisdist > 0)
            add_edge(i, j, Graph);
    }
std::vector<int> comp(num_vertices(Graph));
int num = connected_components(Graph, &comp[0]);
components_num = num;
components = comp;
Using the data I have here<https://dl.dropboxusercontent.com/u/1584218/test.xyz> (I set all z values to 0) BOOST always finds 938 vertices in the main component while Matlab (correctly) find 920. I
have even exported the edges of the BOOST graph to matlab and it is still able to get the correct result, suggesting the structure of the graph is correct.
Thanks for any help,
Simon Choppin
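For reference, a self-contained reduction of the pattern described above (the point data, threshold and helper names here are stand-ins, not the original project's code):

#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/connected_components.hpp>
#include <cmath>
#include <iostream>
#include <vector>

struct Point { double x, y, z; };

double dist(const Point& a, const Point& b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

int main()
{
    std::vector<Point> pts = { {0, 0, 0}, {1, 0, 0}, {40, 0, 0}, {41, 0, 0} };
    const double threshold = 15.0;

    typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::undirectedS> Graph;
    Graph g(pts.size());  // create every vertex up front, including isolated ones

    for (std::size_t i = 0; i < pts.size(); ++i)
        for (std::size_t j = i + 1; j < pts.size(); ++j)
            if (dist(pts[i], pts[j]) < threshold)
                boost::add_edge(i, j, g);

    std::vector<int> component(boost::num_vertices(g));
    int num = boost::connected_components(g, &component[0]);
    std::cout << num << " components\n";  // 2 for the sample points above
}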
From: Boost-users [mailto:boost-users-bounces_at_[hidden]] On Behalf Of Marcin Zalewski
Sent: 28 July 2015 14:16
To: boost-users_at_[hidden]
Subject: Re: [Boost-users] Connected components not working Boost Graph library
OK, but then do you have a different problem than the one posed in your original question? Maybe you can show us what problem do you have now?
On Tue, Jul 28, 2015 at 8:50 AM Choppin, Simon <S.Choppin_at_[hidden]<mailto:S.Choppin_at_[hidden]>> wrote:
Not at all!
I thanked the user in the comments for putting in so much work but the issue is not resolved! I use a threshold distance of 15 to separate the points but the connected components are still not
calculated correctly. I'm on the verge of trying to write an algorithm from scratch. If you can help at all it would be much appreciated.
Simon C
From: Boost-users [mailto:boost-users-bounces_at_[hidden]<mailto:boost-users-bounces_at_[hidden]>] On Behalf Of Marcin Zalewski
Sent: 28 July 2015 12:00
To: boost-users_at_[hidden]<mailto:boost-users_at_[hidden]>
Subject: Re: [Boost-users] Connected components not working Boost Graph library
I can see that you answered yourself on stackoverflow. :)
On Mon, Jul 27, 2015 at 11:36 AM Choppin, Simon <S.Choppin_at_[hidden]<mailto:S.Choppin_at_[hidden]>> wrote:
Hello all,
I hope you can help me, I'm trying to group 3D point data in clusters according to the distance between points. I.e. different groups (or components) are separated by a minimum threshold distance.
To do this I am creating a boost graph (using the Boost Graph library), adding vertices (with 3D point information) and adding edges between nodes of the graph that are within my threshold distance.
However, when I find the connected components on the resulting graph I'm getting an incorrect answer. The vertices of the graph are not grouped correctly. I am comparing my results with a Matlab
script (and their proprietary programs) which correctly separates the points.
I have a stackoverflow post with more detail (and no answers) http://stackoverflow.com/questions/27001402/connected-components-boost-c.
If you are able to help in any way I'd be very appreciative. I really don't want to use compiled Matlab for this solution.
Simon Choppin
Boost-users mailing list
|
{"url":"https://lists.boost.org/boost-users/2015/07/84697.php","timestamp":"2024-11-13T14:23:56Z","content_type":"text/html","content_length":"17341","record_id":"<urn:uuid:45a33f08-0243-4c26-84ac-2c9fce851dce>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00203.warc.gz"}
|
Obfuscation factor - Science without sense...double nonsense
Lie to me
Obfuscation factor.
Obfuscation factor helps us, when conducting a survey, to assess the bias that can be caused by false answers or lies.
Today, you are going to let me be a little dirty. Dirty and piggy, as a matter of fact. The thing is that I've been recently mulling over something that I've noticed a lot of times. I'm sure some of you have noticed it too.
Have you realized how many drivers (and she-drivers, don't be surprised) take advantage of red lights to take off their boogers? Some of them, so help me God, even eat their boogers. Yuck!
However, if I ask people around me, no one admits to doing it, so it intrigues me why I have such bad luck as to run into the piggiest neighbors while I'm stopped at a red light. Of course, the reason might be that the people I ask feel embarrassed to admit they practice such an unhealthy habit.
It seems that knowing the truth poses a huge problem. Imagine that I want to take a survey. I go to the traffic offices, get a list of drivers' phone numbers and start calling people, asking them: do you take off your boogers while at a red light?
Survey’s bias
Any survey you do can be distorted by four sources of error.
The first one is selection bias, when you make a poor choice of respondents. If I only call people from preppy neighborhoods, most of them will answer "no" (and not because they don't do it, but because they will have qualms about confessing the truth). The second source of error is the "no answer" one: many respondents will hang up the phone without answering, giving me regards to my family, by the way. The third source is recall bias. This means that the respondent says he or she doesn't remember the answer to my question. I think this would apply little to our example. What we will find a lot of in our survey is the fourth source of error: lying.
This fact is well known to the Finance Ministry's people. They are very used to people trying to cheat them. If they call you asking whether you've ever cheated on your taxes, what will you answer?
But can we do something to get rid of lying? Well, short of asking the questions in person after giving respondents truth serum, we can't get rid of it completely, but we can minimize it a lot with a little trick.
Obfuscation factor
Let's think that I propose the following game to my telephone respondents: roll a die and, if you get a one or a two, tell me that you take off your boogers, even if it is a lie and you don't. Otherwise, if the die shows three to six, tell me the truth. In either case, you never tell me the number you got when rolling the die.
Thus, the subject I'm asking will understand that I cannot know whether he or she is telling the truth or lying, and so he or she will be less likely to lie. This protection of respondents' privacy implies that I cannot know the true answer of each respondent but, in return, I can know the aggregate behavior of the sample of respondents, although always with some uncertainty. How do we do it?
Let’s develop our example.
First, we're going to think about who will answer "yes". On the one hand, those who roll a one or a two will answer "yes". We call p the probability of this event (2/6 in our example). If I ask n people, we'll come up with a total of n times p such answers (we get this result by calculating the expected number of hits in a series using binomial probability theory).
On the other hand, people who roll three to six and do remove their boogers at red lights will also answer "yes". Their number is n (total respondents) multiplied by the probability of that die outcome (1-p, 4/6 in our example) and multiplied by the probability of practicing such a dirty habit (its prevalence, Pr, which is precisely what we want to know).
So if we add both kinds of "yes" answers, forced and truthful, we come up with the following formula, where m is the number of people who answer that they remove their boogers:
m = np + n(1-p)Pr
And now we can solve for Pr using our broad knowledge of algebra:
Pr = [(m/n) - p] / (1 - p)
Suppose we surveyed 100 individuals and 62 of them answered "yes". How many of them actually eat boogers regularly? Substituting values in our formula (m=62, n=100, p=2/6) yields a figure of 0.43. That means that at least 43% of people take advantage of red lights to do some mining work. And the real figure is probably higher, because some people will still lie despite our clever ruse.
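In code, the estimate is a one-liner (plain Python, using the numbers from the example above):

def randomized_response_prevalence(m, n, p):
    # m: number of "yes" answers, n: respondents, p: probability of a forced "yes"
    return ((m / n) - p) / (1 - p)

print(round(randomized_response_prevalence(62, 100, 2 / 6), 2))  # -> 0.43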
p is usually called the obfuscation factor, and we can get its value using dice, coins or whatever. But be careful when you choose its value. If it is too large, the subject will feel more confident about answering honestly, but the uncertainty in our calculation will be higher. On the other hand, the smaller p is, the more afraid the respondent will be that we can link him to the real answer, so he will be prone to lie through his teeth. As always, virtue lies in the middle.
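A quick simulation sketch of this mechanism (illustrative only):

import random

def simulate(n, p, true_prevalence):
    yes = 0
    for _ in range(n):
        if random.random() < p:                  # the die forces a "yes"
            yes += 1
        elif random.random() < true_prevalence:  # otherwise an honest answer
            yes += 1
    return ((yes / n) - p) / (1 - p)             # recovered estimate of the prevalence

random.seed(1)
print(simulate(100_000, 2 / 6, 0.43))  # should land close to 0.43 for a large n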
We’re leaving…
Those who haven’t gone to vomit by now have seen how we have used binomial probability calculation to address such a disgusting issue. By the way, if you think about it, what we have done resembles
the calculation of a disease prevalence in a population knowing the sensitivity and specificity of a diagnostic test. But that’s another story…
|
{"url":"https://www.cienciasinseso.com/en/obfuscation-factor/","timestamp":"2024-11-11T14:11:20Z","content_type":"text/html","content_length":"73252","record_id":"<urn:uuid:26b7da8e-562d-40fc-a8a8-2e60889d4c2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00703.warc.gz"}
|
Ace the TABE Math in 30 Days
Product details
• Pages: 213
• Language: English
• ISBN-10: 1970036621
• ISBN-13: 978-1970036626
The goal of this book is simple. It will help you incorporate the most effective method and the right strategies to prepare for the TABE Math test quickly and effectively.
Ace the TABE Math in 30 Days, which reflects the 2019 test guidelines and topics, is designed to help you hone your math skills, overcome your exam anxiety, and boost your confidence -- and do your best to defeat the TABE Math Test. This new edition has been updated to replicate questions appearing on the most recent TABE Math tests. It is a valuable learning tool for TABE Math test takers who need extra practice in math to improve their TABE Math score. After reviewing this book, you will have the solid foundation and adequate practice necessary to ace the TABE Math test.
This book is your ticket to ace the TABE Math!
Ace the TABE Math in 30 Days provides students with the confidence and math skills they need to succeed on the TABE Math, providing a solid foundation of basic Math topics with abundant exercises for
each topic. It is designed to address the needs of TABE test takers who must have a working knowledge of basic Math.
Inside the pages of this comprehensive book, students can learn math topics in a structured manner with a complete study program to help them understand essential math skills. It also has many
exciting features, including:
• Content 100% aligned with the 2019 TABE test
• Written by TABE Math tutors and test experts
• Complete coverage of all TABE Math concepts and topics which you will be tested
• Step-by-step guide for all TABE Math topics
• Dynamic design and easy-to-follow activities
• Over 2,500 additional TABE math practice questions in both multiple-choice and grid-in formats with answers grouped by topic, so you can focus on your weak areas
• Abundant Math skill building exercises to help test-takers approach different question types that might be unfamiliar to them
• Exercises on different TABE Math topics such as integers, percent, equations, polynomials, exponents and radicals
• 2 full-length practice tests (featuring new question types) with detailed answers
Effortlessly and confidently follow the step-by-step instructions in this book to ace the TABE Math in a short period of time.
Ace the TABE Math in 30 Days is the only book you'll ever need to master Basic Math topics! It can be used as a self-study course - you do not need to work with a Math tutor. (It can also be used
with a Math tutor).
You’ll be surprised how fast you master the Math topics covering on TABE Math Test.
Ideal for self-study as well as for classroom usage.
|
{"url":"https://testinar.com/product.aspx?P_ID=C7Qw6lpLMPN3FlSCMuSOlw%3D%3D","timestamp":"2024-11-06T04:51:05Z","content_type":"text/html","content_length":"55485","record_id":"<urn:uuid:d8ebce91-e35f-4cf7-a2a3-6440e85f6770>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00106.warc.gz"}
|
Chapter 18. Relaxed Cone Stepping for Relief Mapping
Fabio Policarpo
Perpetual Entertainment
Manuel M. Oliveira
Instituto de Informática—UFRGS
18.1 Introduction
The presence of geometric details on object surfaces dramatically changes the way light interacts with these surfaces. Although synthesizing realistic pictures requires simulating this interaction as
faithfully as possible, explicitly modeling all the small details tends to be impractical. To address these issues, an image-based technique called relief mapping has recently been introduced for
adding per-fragment details onto arbitrary polygonal models (Policarpo et al. 2005). The technique has been further extended to render correct silhouettes (Oliveira and Policarpo 2005) and to handle
non-height-field surface details (Policarpo and Oliveira 2006). In all its variations, the ray-height-field intersection is performed using a binary search, which refines the result produced by some
linear search procedure. While the binary search converges very fast, the linear search (required to avoid missing large structures) is prone to aliasing, by possibly missing some thin structures, as
is evident in Figure 18-1a. Several space-leaping techniques have since been proposed to accelerate the ray-height-field intersection and to minimize the occurrence of aliasing (Donnelly 2005, Dummer
2006, Baboud and Décoret 2006). Cone step mapping (CSM) (Dummer 2006) provides a clever solution to accelerate the intersection calculation for the average case and avoids skipping height-field
structures by using some precomputed data (a cone map). However, because CSM uses a conservative approach, the rays tend to stop before the actual surface, which introduces different kinds of
artifacts, highlighted in Figure 18-1b. Using an extension to CSM that consists of employing four different radii for each fragment (in the directions north, south, east, and west), one can just
slightly reduce the occurrence of these artifacts. We call this approach quad-directional cone step mapping (QDCSM). Its results are shown in Figure 18-1c, which also highlights the technique's artifacts.
Figure 18-1 Comparison of Four Different Ray-Height-Field Intersection Techniques Used to Render a Relief-Mapped Surface from a 256x256 Relief Texture
In this chapter, we describe a new ray-height-field intersection strategy for per-fragment displacement mapping that combines the strengths of both cone step mapping and binary search. We call the
new space-leaping algorithm relaxed cone stepping (RCS), as it relaxes the restriction used to define the radii of the cones in CSM. The idea for the ray-height-field intersection is to replace the
linear search with an aggressive spaceleaping approach, which is immediately followed by a binary search. While CSM conservatively defines the radii of the cones in such a way that a ray never
pierces the surface, RCS allows the rays to pierce the surface at most once. This produces much wider cones, accelerating convergence. Once we know a ray is inside the surface, we can safely apply a
binary search to refine the position of the intersection. The combination of RCS and binary search produces renderings of significantly higher quality, as shown in Figure 18-1d. Note that both the
aliasing visible in Figure 18-1a and the distortions noticeable in Figures 18-1b and 18-1c have been removed. As a space-leaping technique, RCS can be used with other strategies for refining
ray-height-field intersections, such as the one used by interval mapping (Risser et al. 2005).
18.2 A Brief Review of Relief Mapping
Relief mapping (Policarpo et al. 2005) simulates the appearance of geometric surface details by shading individual fragments in accordance to some depth and surface normal information that is mapped
onto polygonal models. A depth map ^[1] (scaled to the [0,1] range) represents geometric details assumed to be under the polygonal surface. Depth and normal maps can be stored as a single RGBA
texture (32-bit per texel) called a relief texture (Oliveira et al. 2000). For better results, we recommend separating the depth and normal components into two different textures. This way texture
compression will work better, because a specialized normal compression can be used independent of the depth map compression, resulting in higher compression ratios and fewer artifacts. It also
provides better performance because during the relief-mapping iterations, only the depth information is needed and a one-channel texture will be more cache friendly (the normal information will be
needed only at the end for lighting). Figure 18-2 shows the normal and depth maps of a relief texture whose cross section is shown in Figure 18-3. The mapping of relief details to a polygonal model
is done in the conventional way, by assigning a pair of texture coordinates to each vertex of the model. During rendering, the depth map can be dynamically rescaled to achieve different effects, and
correct occlusion is achieved by properly updating the depth buffer.
Relief rendering is performed entirely on the GPU and can be conceptually divided into three steps. For each fragment f with texture coordinates (s, t), first transform the view direction V to the
tangent space of f. Then, find the intersection P of the transformed viewing ray against the depth map. Let (k, l) be the texture coordinates of such intersection point (see Figure 18-3). Finally,
use the corresponding position of P, expressed in camera space, and the normal stored at (k, l) to shade f. Self-shadowing can be applied by checking whether the light ray reaches P before reaching
any other point on the relief. Figure 18-3 illustrates the entire process. Proper occlusion among relief-mapped and other scene objects is achieved simply by updating the z-buffer with the z
coordinate of P (expressed in camera space and after projection and division by w). This updated z-buffer also supports the combined use of shadow mapping (Williams 1978) with relief-mapped surfaces.
In practice, finding the intersection point P can be entirely performed in 2D texture space. Thus, let (u, v) be the 2D texture coordinates corresponding to the point where the viewing ray reaches
depth = 1.0 (Figure 18-3). We compute (u, v) based on (s, t), on the transformed viewing direction and on the scaling factor applied to the depth map. We then perform the search for P by sampling the
depth map, stepping from (s, t) to (u, v), and checking if the viewing ray has pierced the relief (that is, whether the depth along the viewing ray is bigger than the stored depth) before reaching (u
, v). If we have found a place where the viewing ray is under the relief, the intersection P is refined using a binary search.
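A minimal sketch of this linear-plus-binary search (an illustration that follows the conventions of the listings later in this chapter, with depth stored in the alpha channel and the ray direction spanning the full [0, 1] depth range; it is not the original relief-mapping source):

float3 ray_intersect_linear_binary(sampler2D relief_map,  // Depth in alpha channel
                                   float3 ray_pos,         // Entry point (depth 0)
                                   float3 ray_dir)         // Spans the full [0,1] depth range
{
    const int linear_steps = 15;
    const int binary_steps = 6;
    float3 step = ray_dir / linear_steps;
    float3 pos = ray_pos;
    // Linear search: march in fixed steps until the ray is under the relief
    for (int i = 0; i < linear_steps; i++)
    {
        float depth = tex2D(relief_map, pos.xy).w;
        if (pos.z < depth)      // still outside the surface
            pos += step;
    }
    // Binary search: refine the intersection between the last two samples
    for (int i = 0; i < binary_steps; i++)
    {
        step *= 0.5;
        float depth = tex2D(relief_map, pos.xy).w;
        if (pos.z < depth)
            pos += step;
        else
            pos -= step;
    }
    return pos;
}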
Although the binary search quickly converges to the intersection point and takes advantage of texture filtering, it could not be used in the beginning of the search process because it may miss some
large structures. This situation is depicted in Figure 18-4a, where the depth value stored at the texture coordinates halfway from (s, t) and (u, v) is bigger than the depth value along the viewing
ray at point 1, even though the ray has already pierced the surface. In this case, the binary search would incorrectly converge to point Q. To minimize such aliasing artifacts, Policarpo et al.
(2005) used a linear search to restrict the binary search space. This is illustrated in Figure 18-4b, where the use of small steps leads to finding point 3 under the surface. Subsequently, points 2
and 3 are used as input to find the desired intersection using a binary search refinement. The linear search itself, however, is also prone to aliasing in the presence of thin structures, as can be
seen in Figure 18-1a. This has motivated some researchers to propose the use of additional preprocessed data to avoid missing such thin structures (Donnelly 2005, Dummer 2006, Baboud and Décoret
2006). The technique described in this chapter was inspired by the cone step mapping work of Dummer, which is briefly described next.
18.3 Cone Step Mapping
Dummer's algorithm for computing the intersection between a ray and a height field avoids missing height-field details by using cone maps (Dummer 2006). A cone map associates a circular cone to each
texel of the depth texture. The angle of each cone is the maximum angle that would not cause the cone to intersect the height field. This situation is illustrated in Figure 18-5 for the case of three
texels at coordinates (s, t), (a, b), and (c, d), whose cones are shown in yellow, blue, and green, respectively.
Starting at fragment f, along the transformed viewing direction, the search for an intersection proceeds as follows: intersect the ray with the cone stored at (s, t), obtaining point 1 with texture
coordinates (a, b). Then advance the ray by intersecting it with the cone stored at (a, b), thus obtaining point 2 at texture coordinates (c, d). Next, intersect the ray with the cone stored at (c, d
), obtaining point 3, and so on. In the case of this simple example, point 3 coincides with the desired intersection. Although cone step mapping is guaranteed never to miss the first intersection of
a ray with a height field, it may require too many steps to converge to the actual intersection. For performance reasons, however, one is often required to specify a maximum number of iterations. As
a result, the ray tends to stop before the actual intersection, implying that the returned texture coordinates used to sample the normal and color maps are, in fact, incorrect. Moreover, the 3D
position of the returned intersection, P', in camera space, is also incorrect. These errors present themselves as distortion artifacts in the rendered images, as can be seen in Figures 18-1b and 18-1c.
18.4 Relaxed Cone Stepping
Cone step mapping, as proposed by Dummer, replaces both the linear and binary search steps described in Policarpo et al. 2005 with a single search based on a cone map. A better and more efficient
ray-height-field intersection algorithm is achieved by combining the strengths of both approaches: the space-leaping properties of cone step mapping followed by the better accuracy of the binary
search. Because the binary search requires one input point to be under and another point to be over the relief surface, we can relax the constraint that the cones in a cone map cannot pierce the
surface. In our new algorithm, instead, we force the cones to actually intersect the surface whenever possible. The idea is to make the radius of each cone as large as possible, observing the
following constraint: As a viewing ray travels inside a cone, it cannot pierce the relief more than once. We call the resulting space-leaping algorithm relaxed cone stepping. Figure 18-7a (in the
next subsection) compares the radii of the cones used by the conservative cone stepping (blue) and by relaxed cone stepping (green) for a given fragment in a height field. Note that the radius used
by RCS is considerably larger, making the technique converge to the intersection using a smaller number of steps. The use of wider relaxed cones eliminates the need for the linear search and,
consequently, its associated artifacts. As the ray pierces the surface once, it is safe to proceed with the fast and more accurate binary search.
18.4.1 Computing Relaxed Cone Maps
As in CSM, our approach requires that we assign a cone to each texel of the depth map. Each cone is represented by its width/height ratio (ratio w/h, in Figure 18-7c). Because a cone ratio can be
stored in a single texture channel, both a depth and a cone map can be stored using a single luminance-alpha texture. Alternatively, the cone map could be stored in the blue channel of a relief
texture (with the first two components of the normal stored in the red and green channels only).
For each reference texel t[i] on a relaxed cone map, the angle of cone C[i] centered at t[i] is set so that no viewing ray can possibly hit the height field more than once while traveling inside C[i]
. Figure 18-7b illustrates this situation for a set of viewing rays and a given cone shown in green. Note that cone maps can also be used to accelerate the intersection of shadow rays with the height
field. Figure 18-6 illustrates the rendering of self-shadowing, comparing the results obtained with three different approaches for rendering per-fragment displacement mapping: (a) relief mapping using
linear search, (b) cone step mapping, and (c) relief mapping using relaxed cone stepping. Note the shadow artifacts resulting from the linear search (a) and from the early stop of CSM (b).
Figure 18-6 Rendering Self-Shadowing Using Different Approaches
Relaxed cones allow rays to enter a relief surface but never leave it. We create relaxed cone maps offline using an O(n ^2) algorithm described by the pseudocode shown in Listing 18-1. The idea is,
for each source texel ti, trace a ray through each destination texel tj, such that this ray starts at (ti.texCoord.s, ti.texCoord.t, 0.0) and points to (tj.texCoord.s, tj.texCoord.t, tj.depth). For
each such ray, compute its next (second) intersection with the height field and use this intersection point to compute the cone ratio cone_ratio(i,j). Figure 18-7c illustrates the situation for a
given pair of (ti, tj) of source and destination texels. C[i] 's final ratio is given by the smallest of all cone ratios computed for t[i] , which is shown in Figure 18-7b. The relaxed cone map is
obtained after all texels have been processed as source texels.
Example 18-1. Pseudocode for Computing Relaxed Cone Maps
for each reference texel ti do
    radius_cone_C(i) = 1;
    source.xyz = (ti.texCoord.s, ti.texCoord.t, 0.0);
    for each destination texel tj do
        destination.xyz = (tj.texCoord.s, tj.texCoord.t, tj.depth);
        ray.origin = destination;
        ray.direction = destination - source;
        (k, w) = text_cords_next_intersection(tj, ray, depth_map);
        d = depth_stored_at(k, w);
        if ((d - ti.depth) > 0.0)  // dst has to be above the src
            cone_ratio(i, j) = length(source.xy - destination.xy) / (d - tj.depth);
            if (radius_cone_C(i) > cone_ratio(i, j))
                radius_cone_C(i) = cone_ratio(i, j);
Note that in the pseudocode shown in Listing 18-1, as well as in the actual code shown in Listing 18-2, we have clamped the maximum cone ratio values to 1.0. This is done to store the cone maps using
integer textures. Although the use of floating-point textures would allow us to represent larger cone ratios with possible gains in space leaping, in practice we have observed that usually only a
small subset of the texels in a cone map would be able to take advantage of that. This is illustrated in the relaxed cone map shown in Figure 18-8c. Only the saturated (white) texels would be
candidates for having cone ratios bigger than 1.0.
Figure 18-8 A Comparison of Different Kinds of Cone Maps Computed for the Depth Map Shown in Figure 18-2
Listing 18-2 presents a shader for generating relaxed cone maps. Figure 18-8 compares three different kinds of cone maps for the depth map associated with the relief texture shown in Figure 18-2. In
Figure 18-8a, one sees a conventional cone map (Dummer 2006) stored using a single texture channel. In Figure 18-8b, we have a quad-directional cone map, which stores cone ratios for the four major
directions into separate texture channels. Notice how different areas in the texture are assigned wider cones for different directions. Red texels indicate cones that are wider to the right, while
green ones are wider to the left. Blue texels identify cones that are wider to the bottom, and black ones are wider to the top. Figure 18-8c shows the corresponding relaxed cone map, also stored
using a single texture channel. Note that its texels are much brighter than the corresponding ones in the conventional cone map in Figure 18-8a, revealing its wider cones.
Example 18-2. A Preprocess Shader for Generating Relaxed Cone Maps
float4 depth2relaxedcone(in float2 TexCoord : TEXCOORD0,
                         in Sampler2D ReliefSampler,
                         in float3 Offset) : COLOR
{
    const int search_steps = 128;
    float3 src = float3(TexCoord, 0);         // Source texel
    float3 dst = src + Offset;                // Destination texel
    dst.z = tex2D(ReliefSampler, dst.xy).w;   // Set dest. depth
    float3 vec = dst - src;                   // Ray direction
    vec /= vec.z;                             // Scale ray direction so that vec.z = 1.0
    vec *= 1.0 - dst.z;                       // Scale again
    float3 step_fwd = vec / search_steps;     // Length of a forward step
    // Search until a new point outside the surface
    float3 ray_pos = dst + step_fwd;
    for (int i = 1; i < search_steps; i++)
    {
        float current_depth = tex2D(ReliefSampler, ray_pos.xy).w;
        if (current_depth <= ray_pos.z)
            ray_pos += step_fwd;
    }
    // Original texel depth
    float src_texel_depth = tex2D(ReliefSampler, TexCoord).w;
    // Compute the cone ratio
    float cone_ratio = (ray_pos.z >= src_texel_depth)
        ? 1.0
        : length(ray_pos.xy - TexCoord) / (src_texel_depth - ray_pos.z);
    // Check for minimum value with previous pass result
    float best_ratio = tex2D(ResultSampler, TexCoord).x;
    if (cone_ratio > best_ratio)
        cone_ratio = best_ratio;
    return float4(cone_ratio, cone_ratio, cone_ratio, cone_ratio);
}
18.4.2 Rendering with Relaxed Cone Maps
To shade a fragment, we step along the viewing ray as it travels through the depth texture, using the relaxed cone map for space leaping. We proceed along the ray until we reach a point inside the
relief surface. The process is similar to what we described in Section 18.3 for conventional cone maps. Figure 18-9 illustrates how to find the intersection between a transformed viewing ray and a
cone. First, we scale the vector representing the ray direction by dividing it by its z component (ray.direction.z), after which, according to Figure 18-9, one can write
Equation 1
d_xy = d * ray_ratio   (the distance moved in texture space when the ray advances a depth of d; ray_ratio = length(ray.direction.xy) after the scaling above)
Equation 2
d_xy = cone_ratio * (height - d)   (the cone's radius at that depth; height is the depth stored at the current texel minus the ray's current depth)
Solving Equations 1 and 2 for d gives the following:
Equation 3
d = cone_ratio * height / (ray_ratio + cone_ratio)
From Equation 3, we compute the intersection point I as this:
Equation 4
I = ray.position + d * ray.direction
The code in Listing 18-3 shows the ray-intersection function for relaxed cone stepping. For performance reasons, the first loop iterates through the relaxed cones for a fixed number of steps. Note
the use of the saturate() function when calculating the distance to move. This guarantees that we stop on the first visited texel for which the viewing ray is under the relief surface. At the end of
this process, we assume the ray has pierced the surface once and then start the binary search for refining the coordinates of the intersection point. Given such coordinates, we then shade the
fragment as described in Section 18.2.
Example 18-3. Ray Intersect with Relaxed Cone
// Ray intersect depth map using relaxed cone stepping.
// Depth value stored in alpha channel (black at object surface)
// and relaxed cone ratio stored in blue channel.
void ray_intersect_relaxedcone(sampler2D relief_map,   // Relaxed cone map
                               inout float3 ray_pos,   // Ray position
                               inout float3 ray_dir)   // Ray direction
{
    const int cone_steps = 15;
    const int binary_steps = 6;

    ray_dir /= ray_dir.z;                   // Scale ray_dir
    float ray_ratio = length(ray_dir.xy);
    float3 pos = ray_pos;
    for (int i = 0; i < cone_steps; i++)
    {
        float4 tex = tex2D(relief_map, pos.xy);
        float cone_ratio = tex.z;
        float height = saturate(tex.w - pos.z);
        float d = cone_ratio * height / (ray_ratio + cone_ratio);
        pos += ray_dir * d;
    }
    // Binary search initial range and initial position
    float3 bs_range = 0.5 * ray_dir * pos.z;
    float3 bs_position = ray_pos + bs_range;
    for (int i = 0; i < binary_steps; i++)
    {
        float4 tex = tex2D(relief_map, bs_position.xy);
        bs_range *= 0.5;
        if (bs_position.z < tex.w)      // If outside
            bs_position += bs_range;    // Move forward
        else
            bs_position -= bs_range;    // Move backward
    }
    ray_pos = bs_position;              // Write the refined position back through the inout parameter
}
Let f be the fragment to be shaded and let K be the point where the viewing ray has stopped (that is, just before performing the binary search), as illustrated in Figure 18-10. If too few steps were
used, the ray may have stopped before reaching the surface. Thus, to avoid skipping even thin height-field structures (see the example shown in Figure 18-4a), we use K as the end point for the binary
search. In this case, if the ray has not pierced the surface, the search will converge to point K.
Figure 18-10 The Viewing Ray Through Fragment f, with Texture Coordinates (s, t)
Let (m, n) be the texture coordinates associated to K and let d[K] be the depth value stored at (m, n) (see Figure 18-10). The binary search will then look for an intersection along the line segment
ranging from points H to K, which corresponds to texture coordinates ((s + m)/2, (t+n)/2) to (m, n), where (s, t) are the texture coordinates of fragment f (Figure 18-10). Along this segment, the
depth of the viewing ray varies linearly from (d[K] /2) to d[K] . Note that, instead, one could use (m, n) and (q, r) (the texture coordinates of point J, the previously visited point along the ray)
as the limits for starting the binary search refinement. However, because we are using a fixed number of iterations for stepping over the relaxed cone map, saving (q, r) would require a conditional
statement in the code. According to our experience, this tends to increase the number of registers used in the fragment shader. The graphics hardware has a fixed number of registers and it runs as
many threads as it can fit in its register pool. The fewer registers we use, the more threads we will have running at the same time. The latency imposed by the large number of dependent texture reads
in relief mapping is hidden when multiple threads are running simultaneously. More-complex code in the loops will increase the number of registers used and thus reduce the number of parallel threads,
exposing the latency from the dependent texture reads and reducing the frame rate considerably. So, to keep the shader code shorter, we start the binary search using H and K as limits. Note that
after only two iterations of the binary search, one can expect to have reached a search range no bigger than the one defined by the points J and K.
It should be clear that the use of relaxed cone maps could still potentially lead to some distortion artifacts similar to the ones produced by regular (conservative) cone maps (Figure 18-1b). In
practice, they tend to be significantly less pronounced for the same number of steps, due to the use of wider cones. According to our experience, the use of 15 relaxed cone steps seems to be
sufficient to avoid such artifacts in typical height fields.
18.5 Conclusion
The combined use of relaxed cone stepping and binary search for computing rayheight-field intersection significantly reduces the occurrence of artifacts in images generated with per-fragment
displacement mapping. The wider cones lead to more-efficient space leaping, whereas the binary search accounts for more accuracy. If too few cone stepping iterations are used, the final image might
present artifacts similar to the ones found in cone step mapping (Dummer 2006). In practice, however, our technique tends to produce significantly better results for the same number of iterations or
texture accesses. This is an advantage, especially for the new generations of GPUs, because although both texture sampling and computation performance have been consistently improved, computation
performance is scaling faster than bandwidth.
Relaxed cone stepping integrates itself with relief mapping in a very natural way, preserving all of its original features. Figure 18-11 illustrates the use of RCS in renderings involving depth
scaling (Figures 18-11b and 18-11d) and changes in tiling factors (Figures 18-11c and 18-11d). Note that these effects are obtained by appropriately adjusting the directions of the viewing rays
(Policarpo et al. 2005) and, therefore, not affecting the cone ratios.
Figure 18-11 Images Showing Changes in Apparent Depth and Tiling Factors
Mipmapping can be safely applied to color and normal maps. Unfortunately, conventional mipmapping should not be applied to cone maps, because the filtered values would lead to incorrect
intersections. Instead, one should compute the mipmaps manually, by conservatively taking the minimum value for each group of pixels. Alternatively, one can sample the cone maps using a
nearest-neighbors strategy. In this case, when an object is seen from a distance, the properly sampled color texture tends to hide the aliasing artifacts resulting from the sampling of a
high-resolution cone map. Thus, in practice, the only drawback of not applying mipmapping to the cone map is the performance penalty for not taking advantage of sampling smaller textures.
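One possible way to build such conservative mipmap levels is a manual min-filter downsampling pass over the cone map; the sketch below assumes a full-screen pass per level, with TexelSize set to one over the source level's resolution (the pass setup and names are illustrative, not part of the chapter's code):

float4 cone_min_downsample(in float2 TexCoord : TEXCOORD0,
                           uniform sampler2D SourceLevel,
                           uniform float2 TexelSize) : COLOR
{
    // Sample the 2x2 block of the finer level that this coarse texel covers
    float c0 = tex2D(SourceLevel, TexCoord).x;
    float c1 = tex2D(SourceLevel, TexCoord + float2(TexelSize.x, 0.0)).x;
    float c2 = tex2D(SourceLevel, TexCoord + float2(0.0, TexelSize.y)).x;
    float c3 = tex2D(SourceLevel, TexCoord + TexelSize).x;
    // Keep the smallest cone ratio so the coarser level stays conservative
    float m = min(min(c0, c1), min(c2, c3));
    return float4(m, m, m, m);
}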
18.5.1 Further Reading
Relief texture mapping was introduced in Oliveira et al. 2000 using a two-pass approach consisting of a prewarp followed by conventional texture mapping. The prewarp, based on the depth map, was
implemented on the CPU and the resulting texture sent to the graphics hardware for the final mapping. With the introduction of fragment processors, Policarpo et al. (2005) generalized the technique
for arbitrary polygonal models and showed how to efficiently implement it on a GPU. This was achieved by performing the ray-height-field intersection in 2D texture space. Oliveira and Policarpo
(2005) also described how to render curved silhouettes by fitting a quadric surface at each vertex of the model. Later, they showed how to render relief details in preexisting applications using a
minimally invasive approach (Policarpo and Oliveira 2006a). They have also generalized the technique to map non-height-field structures onto polygonal models and introduced a new class of impostors
(Policarpo and Oliveira 2006b). More recently, Oliveira and Brauwers (2007) have shown how to use a 2D texture approach to intersect rays against depth maps generated under perspective projection and
how to use these results to render real-time refractions of distant environments through deforming objects.
18.6 References
Baboud, Lionel, and Xavier Décoret. 2006. "Rendering Geometry with Relief Textures." In Proceedings of Graphics Interface 2006.
Donnelly, William. 2005. "Per-Pixel Displacement Mapping with Distance Functions." In GPU Gems 2, edited by Matt Pharr, pp. 123–136. Addison-Wesley.
Dummer, Jonathan. 2006. "Cone Step Mapping: An Iterative Ray-Heightfield Intersection Algorithm." Available online at http://www.lonesock.net/files/ConeStepMapping.pdf.
Oliveira, Manuel M., Gary Bishop, and David McAllister. 2000. "Relief Texture Mapping." In Proceedings of SIGGRAPH 2000, pp. 359–368.
Oliveira, Manuel M., and Fabio Policarpo. 2005. "An Efficient Representation for Surface Details." UFRGS Technical Report RP-351. Available online at http://www.inf.ufrgs.br/~oliveira/pubs_files/
Oliveira, Manuel M., and Maicon Brauwers. 2007. "Real-Time Refraction Through Deformable Objects." In Proceedings of the 2007 Symposium on Interactive 3D Graphics and Games, pp. 89–96.
Policarpo, Fabio, Manuel M. Oliveira, and João Comba. 2005. "Real-Time Relief Mapping on Arbitrary Polygonal Surfaces." In Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games, pp.
Policarpo, Fabio, and Manuel M. Oliveira. 2006a. "Rendering Surface Details in Games with Relief Mapping Using a Minimally Invasive Approach." In SHADER X4: Advance Rendering Techniques, edited by
Wolfgang Engel, pp. 109–119. Charles River Media, Inc.
Policarpo, Fabio, and Manuel M. Oliveira. 2006b. "Relief Mapping of Non-Height-Field Surface Details." In Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games, pp. 55–62.
Risser, Eric, Musawir Shah, and Sumanta Pattanaik. 2005. "Interval Mapping." University of Central Florida Technical Report. Available online at http://graphics.cs.ucf.edu/IntervalMapping/images/
Williams, Lance. 1978. "Casting Curved Shadows on Curved Surfaces." In Computer Graphics (Proceedings of SIGGRAPH 1978) 12(3), pp. 270–274.
|
{"url":"https://developer.nvidia.com/gpugems/gpugems3/part-iii-rendering/chapter-18-relaxed-cone-stepping-relief-mapping","timestamp":"2024-11-07T12:13:26Z","content_type":"text/html","content_length":"62788","record_id":"<urn:uuid:3e7f0902-53f3-419d-9739-c916140f4779>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00626.warc.gz"}
|
What number is the Y in the alphabet? - Answers
It is 5 to 21 (taking y to be a vowel).
The alphabet has 26 letters. The letter "Z" comes after the letter "Y". The English alphabet, in order: ABCDEFGHIJKLMNOPQRSTUVWXYZ
it is Y because if you take out Z from the whole alphabet its Y
The Alphabet from A to Y with Bonus Letter Z was created in 2007.
Y = You. Y = the 25th letter of the alphabet. Y×2 = 25×2 = 50. So no, you become a number greater than 40.
|
{"url":"https://math.answers.com/math-and-arithmetic/What_number_is_the_Y_in_the_alphabet","timestamp":"2024-11-05T07:36:40Z","content_type":"text/html","content_length":"160137","record_id":"<urn:uuid:81f8757c-fe7a-45eb-94be-32d81e994893>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00344.warc.gz"}
|
2.6.4: Circle Pyramid with Comments — QuizWhiz Homework Help
Final answer:A circle pyramid is a geometric shape that combines a pyramid base with a curved surface that forms a circle at the top. It is used in various mathematical applications, such as finding
the volume and surface area of the pyramid.Explanation:A circle pyramid is a geometric shape that combines a pyramid base with a curved surface that forms a circle at the top. It is used in various
mathematical applications, such as finding the volume and surface area of the pyramid. The circle fits within a square with an area of less than 4 m², emphasizing its compactness and efficiency. This
shape demonstrates the versatility and connections between geometry and mathematics....
|
{"url":"https://quizwhiz.org/questions-and-answers/264-circle-pyramid-with-comments","timestamp":"2024-11-06T19:23:08Z","content_type":"text/html","content_length":"20631","record_id":"<urn:uuid:3c0daecf-8ebc-479b-bcde-90fd59dfa740>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00196.warc.gz"}
|
C2 Binomial expansion question.
This question is on page 78 of the C2 book if more information is needed.
Q.10 (question number is in the book), part a.
In the binomial expansion of (2k+x)^n, where k is a constant and n is a positive integer, the coefficient of x^2 is equal to the coefficient of x^3.
Prove that n= 6k+2
Q.13 part b
Given that (2+x)^5 + (2-x)^5 ≡ A + Bx^2 + Cx^4
I found that A=64 B=160 C=20
Now, using the substitution y = x^2 and the answers to part a, solve
(2+x)^5 + (2-x)^5=349
I don't understand the part about substituting y = x^2.
You sure the first question is correct? I keep getting n = 2k + 2.
|
{"url":"https://www.thestudentroom.co.uk/showthread.php?t=205848","timestamp":"2024-11-06T00:48:23Z","content_type":"text/html","content_length":"311616","record_id":"<urn:uuid:1d7d3e22-6a1d-43fc-bec0-abf3e78fd187>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00091.warc.gz"}
|
Identifying the Graph of a Quadratic Function
Question Video: Identifying the Graph of a Quadratic Function Mathematics • Third Year of Preparatory School
Choose the graph that represents the function f(x) = 2x² + 2. [A] Graph A [B] Graph B [C] Graph C [D] Graph D [E] Graph E
Video Transcript
Choose the graph that represents the function f of x equals two x squared plus two.
The function we've been given is a quadratic function of the form f of x equals a x squared plus b. In this case, the values of a and b are each equal to two. From this, we can deduce that the graph that represents this function is a parabola with the y-axis as its line of symmetry. Because the value of a, the coefficient of x squared, is positive, the parabola will be U shaped. It will open upwards. We can find the y-intercept of the curve by evaluating f of zero because x is equal to zero on the y-axis. This gives f of zero equals two multiplied by zero squared plus two. That's zero plus two, which is equal to two.
We can also recall that in general the y-intercept of the function f of x equals a x squared plus b is b. So our value of two is consistent with the value of two as the constant term in our given function.
So now, we know that we're looking for a U-shaped parabola with a y-intercept of two. As this point is on the y-axis, which is the line of symmetry for this parabola, the point with coordinates zero, two will also be the vertex of the curve. Based on this, we can rule out options (B), (D), and (E). Option (B) is a U-shaped parabola with the y-axis as its line of symmetry, but its vertex is the point zero, zero. (D) and (E) are n-shaped parabolas, or parabolas that open downwards. So they correspond to quadratic functions with a negative coefficient of x squared.
We're left with options (A) and (C). These are both parabolas which open upwards, they both have the y-axis as their line of symmetry, and they each have their vertex at the point zero, two. To decide which is the correct graph, we can choose any other point that lies on the curve and test whether the coordinates of this point satisfy the function f of x equals two x squared plus two.
For graph (A), we can use the point with coordinates one, three. Evaluating f of one, we have two multiplied by one squared plus two. That's two plus two, which is equal to four. And as this isn't equal to three, this tells us that graph (A) is not the correct graph to represent the function f of x equals two x squared plus two. If we look at graph (C), however, we can see that the point with coordinates one, four does lie on this curve.
We can perform a further check by using another point, perhaps the point with coordinates two, 10. Evaluating f of two, we have two multiplied by two squared plus two. Two squared is four. Multiplying by two gives eight, and adding two gives 10. So this confirms that the point with coordinates two, 10 satisfies the function f of x equals two x squared plus two.
So we found that the graph that represents the function f of x equals two x squared plus two is graph (C). It has the correct shape, the correct line of symmetry, the correct vertex, and we've checked that the coordinates of two other points that lie on the curve satisfy the given function.
|
{"url":"https://www.nagwa.com/en/videos/768150451585/","timestamp":"2024-11-14T08:52:53Z","content_type":"text/html","content_length":"252523","record_id":"<urn:uuid:d43bf177-c0dd-4be6-92d2-d242d99e79dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00312.warc.gz"}
|
An LCR series ac circuit is at resonance with 10 V each across L.C and
An LCR series ac circuit is at resonance with 10 V each across L.C and R. If the resistance is halved, the respective voltages across L,C and R are
The correct Answer is:D
Since the resistance is halved, the current at resonance doubles: i' = 2i.
V_R' = i'·(R/2) = 2i × R/2 = 2 × (V_R/R) × R/2 = V_R = 10 Volt
V_L' = i'·X_L = 2V_L = 20 Volt
V_C' = i'·X_C = 2V_C = 20 Volt
Knowledge Check
• An LCR series ac circuit is at resonance with 10 V each across L, C and R. If the resistance is halved, the respective voltage across L, C and R are
• In an L-C-R series, AC circuit at resonance
A. the capacitive reactance is more than the inductive reactance
B. the capacitive reactance equals the inductive reactance
C. the capacitive reactance is less than the inductive reactance
D. the power dissipated is minimum
• In an LCR series circuit, the voltages across R, L and C at resonance are 40 V and 60 V respectively. The applied voltage is .
|
{"url":"https://www.doubtnut.com/qna/649445914","timestamp":"2024-11-01T20:37:34Z","content_type":"text/html","content_length":"251762","record_id":"<urn:uuid:1feb4b11-8d85-46c7-aa4a-74ed675fbe6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00087.warc.gz"}
|
Free Graphing Calculator 2 for Windows 10 Registration Key
Free Graphing Calculator 2 for Windows 10
February 19, 2024
1: What does the registration key mean?
A registration key is a one-of-a-kind ID generated by the FME Licensing Assistant from system data. It's Safe's way of limiting a single fixed license to a single computer.
2: What is a registration key number?
A registration key is a code of letters and numbers that allows access to one of the many Thomson Reuters products, such as Westlaw, CLEAR, Firm Central, and more.
3: What is the registration key?
Each person will create an individual user account by entering the customer's account number, an online registration key (available from your local dealer), and basic billing and shipping address
information. The account administrator will be the first account created.
The wildly successful iOS app, finally on Windows. Use the in-app keyboard, or, if on desktop, just type your math. Does far more than most of the paid calculators out there, let alone the free ones. Features:
1) Scientific Calculator. Simple to grasp and easy to use, but powerful features are available when you need them. Available functions include the following: the usual arithmetic functions and exponentiation, square root, cube root, nth root, natural log, log base 10, log of arbitrary base, absolute value, factorial, permutations (nPr), combinations (nCr), modulus, random integer, bell curve, cumulative normal distribution, decimal to fraction.
2) Graphing. Capabilities: Graph up to four equations at once. Graphs are labeled. You can drag the graph or pinch to zoom in or out. The calculator can find roots and intersections. Graph in polar coordinates. Graph parametric equations. Can graph implicit functions, such as x^2+y^2-4=0; most calculator apps can't do this.
3) A unit converter. With a tap, you can enter the result of your conversion into the calculator. Currently converts different units of the following: acceleration, angle, area, density, distance, energy, force, mass, power, pressure, speed, temperature, time, and volume. Great for doing physics homework.
4) Constants for scientific calculations: speed of light, strength of gravity at Earth's surface, etc. Tapping on a constant will insert it into your calculation, i.e., you don't have to key in the value. Again, great for doing physics homework.
5) It can make a table of the values of any function you care to enter. You can choose the starting x value of the table, as well as how much x increases for each successive row.
6) Help screens linked directly to many of the available functions and constants. Tap the disclosure arrow to see the definition.
7) Forgot the quadratic formula? Or the double-angle formulas for sine and cosine? The math/science reference hits the high points of various subjects. Currently includes algebra, differential and integral calculus, geometry, trigonometry, vectors, vector calculus, and classical mechanics.
8) Keep track of significant figures (AKA sig figs).
9) Statistics. Enter data and make a histogram, box and whisker plot, or scatter plot with optional regression line.
10) Pro upgrade. Graph up to 20 functions at once. Support for matrices (including multiplication, inversion, row reduction, eigenvalues/eigenvectors and more). Explore definitions of derivative and definite integral in the graph. Purchasers will no longer see ads.
If you are viewing this in iTunes, you will see five iPhone screenshots and five iPad screenshots. But even ten shots don't come close to showing everything this calculator can do. I'd love to hear your comments or suggestions. You can write me at the email address on the App's settings tab. Thanks.
Install a license key using a registration key file?
1: Click Install Key after navigating to Tools & Settings > License Management > Plesk License Key.
2: Choose Upload a licence key file.
3: Click OK after providing the path to the key file you downloaded from the email.
Performance Metrics — Data Quality (Part 3 of 4)
Hello and welcome to our twelfth article. This issue discusses the importance of Data Quality with reference to backtesting reports in MetaTrader 4 (MT4). All articles are saved at our Medium page
and also on this site, which has a clickable version of the map of all articles: https://theiqt.com/blog.
These articles are based on my experience from consulting and product development at The IQT. Do let us know if there are topics you would like us to cover, questions you would like to resolve, or if
there are insights you would like to share from your own experience.
Data Quality is crucial as it underpins all inference drawn from performance analysis.
1) Data Quality is shown by the type of data model used, the modelling quality and the number of mismatched chart errors.
2) Assumptions over the values of spread and strategy parameters also impact inference on a strategy’s performance.
3) Finally, the larger the Data Quantity we have, the more confident we can be that the trading strategy is stable. Data Quantity is shown by the number of bars / candles, ticks, trading days and
trades entered.
This is the third article dealing with strategy performance metrics but arguably the most important; without good quality data we cannot be sure that our analysis or usage of performance metrics has
a solid foundation. Below is the sample MT4 backtesting report from previous articles. We will use it to discuss the metrics related to Data Quality.
A Sample MT4 Backtesting Report
• “Modelling quality” in the MT4 report shows the overall quality of the data, which is 99% here. MT4 usually uses broker data, which has a maximum quality level of 90% but can be much lower. Third-party tools such as Tick Data Suite enable the use of data which has been cleaned (errors removed); data errors sometimes arise in real time, and these are not always corrected by brokers later. Tick Data Suite was recommended to me in 2013 by one of the co-founders of Atom8, a broker which was later taken over by Vantage FX (https://www.forexbrokerz.com/news/).
• “Model” refers to the level of detail / granularity of the data. It is best to select “Every tick” so that the most detailed level of data is used. Note that even if “Every tick” is selected, the
broker data may not contain tick data, so may instead use one-minute data to estimate (interpolate) the movement of price within each minute.
• As Birt from Tick Data Suite (TDS) says of MT4’s interpolation routine during backtesting, “if a position has both its SL and its TP within a bar’s price range, it’s a coin toss whether it will hit
stop/loss or take profit.” This is why we recommend using TDS, as by ensuring you have tick data, which is the lowest level of data detail possible, there is no interpolation and you can have full
confidence in your strategy’s results. This is especially important for higher frequency trading strategies which make use of short timeframes (less than hourly), such as the one-minute and
thirty-minute timeframes.
• “Mismatched charts errors” refers to the number of inconsistencies between data from different timeframes, e.g. broker data may have hourly data which does not align with one-minute data. The
report shows 0 such errors for this backtest.
• “Spread” is the level of spread (in points) assumed for the backtest. This can be set to a fixed level e.g. 10, or by choosing 0, the current level of spread in the market is used, as is the case
here; the report says “Current (12)”. An unrealistically low choice of spread may incorrectly give the impression that a strategy is profitable, especially if it involves frequent transactions and
small profit targets, as in scalping strategies.
• “Parameters” shows the input parameters to the trading strategy. The key ones in The IQT’s implementation of the MA crossover strategy are:
- general_percentage = 0.1: this sets the amount of funds to be risked per trade to 0.1% of the account equity.
- general_max_trades = 1: this sets the maximum number of concurrent trades to 1.
- general_slippage = 3: this allows slippage in price fills of up to 3 pips.
- short_period = 5: this sets the short / fast MA to use 5 periods' data.
- long_period = 50: this sets the long / slow MA to use 50 periods' data.
The more data we use in a backtest, the more confident we can be that the strategy predictably generates returns. We can assess Data Quantity in MT4 reports using the following measures (the larger
the better):
• “Period” (Δt in our Fundamental Equation of Trading) tells us the timeframe used, hourly here, and the total time period over which the strategy was tested, which is August — November 2021 (4 months). The number of trading days within this period = 86 (we calculated this number by omitting weekends; a short sketch of this count is given after this list).
• “Total trades” (N) is the total number of trading positions entered and exited, which is 55 here. The number of transactions used to enter, modify and exit the positions may be larger than this, so
analysis of the list of trades is needed if we wish to investigate transaction costs.
• “Bars in test” is the number of bars / candles in the chart during the period, which is 2164 here.
• “Ticks modelled” is the number of ticks / transactions in the market during the period, which equals 4637600 here.
• The only caveat to the general rule that the larger these measures are, the better, is that regime changes may occur after certain market events. E.g. a trading strategy which assumes gold will
always appreciate would not likely be viable if a cheap way to convert tin into gold were discovered, and increasing deposits of tin were found.
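As a minimal sketch of the weekday count mentioned in the “Period” bullet above (the exact start and end dates are assumptions for illustration, since the report only gives August to November 2021, and market holidays are not excluded):

```python
# Count weekdays (Mon-Fri) in an assumed backtest window; the exact dates
# here are illustrative, not taken from the report itself.
from datetime import date, timedelta

start, end = date(2021, 8, 1), date(2021, 11, 30)

trading_days = sum(
    1
    for offset in range((end - start).days + 1)
    if (start + timedelta(days=offset)).weekday() < 5  # 0-4 are Mon-Fri
)
print(trading_days)  # approximate count; market holidays are not excluded
```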
Thanks and happy trading!
Basic College Mathematics (10th Edition) Chapters 1-5 - Cumulative Review Exercises - Page 377 11
Based on the order of operations, division comes before addition. Therefore this problem can be split into two parts: 1. $18\div6 = 3$; 2. $36+3=39$.
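A one-line check, assuming the original exercise is evaluating 36 + 18 ÷ 6:

```python
# Division is evaluated before addition, so this prints 39.0.
print(36 + 18 / 6)
```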
Samacheer Kalvi 12th Physics Solutions Chapter 4 Electromagnetic Induction and Alternating Current
Students who wish to prepare the Samacheer Kalvi Class 12th Physics Solutions Subject Chapter 4 Electromagnetic Induction and Alternating Current can rely on the Tamilnadu State Board Solutions for
Class 12th Physics Solutions Chapter 4 Electromagnetic Induction and Alternating Current Questions and Answers prevailing. Become perfect with the concepts of Samacheer Kalvi Class 12th Physics
Solutions Chapter 4 Electromagnetic Induction and Alternating Current Questions and Answers and score better grades in your exams. Detailed Solutions are provided to the concepts by experts keeping
in mind the latest edition textbooks and syllabus.
Tamilnadu Samacheer Kalvi 12th Physics Solutions Chapter 4 Electromagnetic Induction and Alternating Current
Ace up your preparation by referring to the Samacheer Kalvi Class 12th Physics Solutions Chapter 4 Electromagnetic Induction and Alternating Current and learn all the topics within. Click on the
topic you want to prepare from the Class 12th Chapter 4 Electromagnetic Induction and Alternating Current Questions and Answers and prepare it easily. You can understand your strengths and weaknesses
by practicing the Questions in Samacheer Kalvi Class 12th Physics Solutions PDF.
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Textual Evaluation Solved
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Multiple Choice Questions
12th Physics Chapter 4 Book Back Answers Question 1.
An electron moves on a straight line path XY as shown in the figure. The coil abcd is adjacent to the path of the electron. What will be the direction of current, if any, induced in the coil? (NEET 2015)
(a) The current will reverse its direction as the electron goes past the coil
(b) No current will be induced
(c) abcd
(d) adcb
(a) The current will reverse its direction as the electron goes past the coil
12th Physics Lesson 4 Book Back Answers Question 2.
A thin semi-circular conducting ring (PQR) of radius r is falling with its plane vertical in a horizontal magnetic field B, as shown in the figure. The potential difference developed across the ring when its speed is v is- (NEET 2014)
(a) Zero
(b) \(\frac{Bvπr^{2}}{2}\)
(c) πrBv and R is at higher potential
(d) 2rBv and R is at higher potential
(d) 2rBv and R is at higher potential
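A brief justification of option (d), using the usual effective-length argument: only the straight-line separation of the ring's two ends matters, and for a semicircle of radius r that separation is the diameter 2r, so the magnitude of the induced emf is
\(|ε| = B(2r)v = 2rBv\),
with the stated polarity (R at higher potential) taken from the given answer.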
12th Physics 4th Lesson Book Back Answers Question 3.
The flux linked with a coil at any instant t is given by Φ[B] = 10t^2 – 50t + 250. The induced emf at t = 3s is-
(a) -190 V
(b) -10 V
(c) 10 V
(d) 190 V
(b) -10 V
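A possible working, using \(ε = -\frac{dΦ_B}{dt}\):
\(ε = -(20t - 50)\); at t = 3 s, \(ε = -(60 - 50) = -10\) V, which matches option (b).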
Samacheer Kalvi Guru 12th Physics Question 4.
When the current changes from +2A to -2A in 0.05 s, an emf of 8 V is induced in a coil. The co-efficient of self-induction of the coil is-
(a) 0.2 H
(b) 0.4 H
(c) 0.8 H
(d) 0.1 H
(d) 0.1 H
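A possible working, using the magnitude relation \(|ε| = L\left|\frac{di}{dt}\right|\):
\(L = \frac{8}{4/0.05} = \frac{8}{80} = 0.1\) H, which matches option (d).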
Samacheerkalvi.Guru 12th Physics Question 5.
The current i flowing in a coil varies with time as shown in the figure. The variation of induced emf with time would be- (NEET-2011)
12th Physics Samacheer Kalvi Question 6.
A circular coil with a cross-sectional area of 4 cm^2 has 10 turns. It is placed at the centre of a long solenoid that has 15 turns/cm and a cross-sectional area of 10 cm^2. The axis of the coil
coincides with the axis of the solenoid. What is their mutual inductance?
(a) 7.54 μH
(b) 8.54 μH
(c) 9.54 μH
(d) 10.54 μH
(a) 7.54 μH
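One way to arrive at option (a), assuming the coil lies entirely inside the solenoid so the relevant area is that of the coil (with n = 15 turns/cm = 1500 turns/m):
\(M = μ_{0} n N A = (4π × 10^{-7})(1500)(10)(4 × 10^{-4}) ≈ 7.54 × 10^{-6}\) H = 7.54 μH.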
Samacheer Kalvi 12th Physics Question 7.
In a transformer, the number of turns in the primary and the secondary are 410 and 1230, respectively. If the current in primary is 6A, then that in the secondary coil is-
(a) 2 A
(b) 18 A
(c) 12 A
(d) 1 A
(a) 2 A
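A possible working, using the ideal-transformer relation \(\frac{I_{s}}{I_{p}} = \frac{N_{p}}{N_{s}}\):
\(I_{s} = 6 × \frac{410}{1230} = 2\) A, which matches option (a).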
Question 8.
A step-down transformer reduces the supply voltage from 220 V to 11 V and increase the current from 6 A to 100 A. Then its efficiency is-
(a) 1.2
(b) 0.83
(c) 0.12
(d) 0.9
(b) 0.83
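A possible working, taking efficiency as output power over input power:
\(η = \frac{V_{s} I_{s}}{V_{p} I_{p}} = \frac{11 × 100}{220 × 6} = \frac{1100}{1320} ≈ 0.83\), which matches option (b).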
Question 9.
In an electrical circuit, R, L, C and AC voltage source are all connected in series. When L is removed from the circuit, the phase difference between the voltage and current in the circuit is \(\frac{π}{3}\). Instead, if C is removed from the circuit, the phase difference is again \(\frac{π}{3}\). The power factor of the circuit is- (NEET 2012)
(a) \(\frac { 1 }{ 2 }\)
(b) \(\frac { 1 }{ √ 2 }\)
(c) 1
(d) \(\frac { √ 3 }{ 2 }\)
(c) 1
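A brief justification of option (c): with L removed, \(\tan\frac{π}{3} = \frac{X_{C}}{R}\); with C removed, \(\tan\frac{π}{3} = \frac{X_{L}}{R}\). Hence \(X_{L} = X_{C}\), the full circuit is at resonance, the phase angle is zero and the power factor is \(\cos 0 = 1\).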
Question 10.
In a series RL circuit, the resistance and inductive reactance are the same. Then the phase difference between the voltage and current in the circuit is-
(a) \(\frac { π }{ 4 }\)
(b) \(\frac { π }{ 2 }\)
(c) \(\frac { π }{ 6 }\)
(d) zero
(a) \(\frac { π }{ 4 }\)
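A one-line justification of option (a): \(\tan φ = \frac{X_{L}}{R} = 1\), so \(φ = \frac{π}{4}\).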
Question 11.
In a series resonant RLC circuit, the voltage across the 100 Ω resistor is 40 V. The resonant frequency ω is 250 rad/s. If the value of C is 4 μF, then the voltage across L is-
(a) 600 V
(b) 4000 V
(c) 400 V
(d) 1 V
(c) 400 V
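A possible working for option (c): the current is \(I = \frac{40}{100} = 0.4\) A, and at resonance \(X_{L} = X_{C} = \frac{1}{ωC} = \frac{1}{250 × 4 × 10^{-6}} = 1000\) Ω, so \(V_{L} = I X_{L} = 0.4 × 1000 = 400\) V.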
Question 12.
An inductor 20 mH, a capacitor 50 μF and a resistor 40 Ω are connected in series across a source of emf v = 10 sin 340 t. The power loss in AC circuit is-
(a) 0.76 W
(b) 0.89 W
(c) 0.46 W
(d) 0.67 W
(c) 0.46 W
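A short numerical check of option (c), using the given component values (v = 10 sin 340t, so the peak voltage is 10 V and ω = 340 rad/s):

```python
# Average power dissipated in a series RLC circuit driven by v = 10 sin(340 t).
import math

Vm, omega = 10.0, 340.0          # peak voltage (V), angular frequency (rad/s)
R, L, C = 40.0, 20e-3, 50e-6     # resistance, inductance, capacitance

X_L = omega * L
X_C = 1 / (omega * C)
Z = math.sqrt(R**2 + (X_L - X_C)**2)

I_rms = (Vm / math.sqrt(2)) / Z
print(round(I_rms**2 * R, 2))    # average power, ~0.46 W (all of it in R)
```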
Questions 13.
The instantaneous values of alternating current and voltage in a circuit are i = \(\frac{1}{\sqrt{2}}\) sin(100πt) A and v = \(\frac{1}{\sqrt{2}}\) sin \(\left(100 \pi t+\frac{\pi}{3}\right)\) V. The average power in watts consumed in the circuit is-
(IIT Main 2012)
(a) \(\frac { 1 }{ 4 }\)
(b) \(\frac { √3 }{ 4 }\)
(c) \(\frac { 1 }{ 2 }\)
(d) \(\frac { 1 }{ 8 }\)
(d) \(\frac { 1 }{ 8 }\)
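A possible working for option (d), using \(P = \frac{V_{m} I_{m}}{2}\cos φ\) with a phase difference of \(\frac{π}{3}\):
\(P = \frac{1}{2} × \frac{1}{\sqrt{2}} × \frac{1}{\sqrt{2}} × \cos\frac{π}{3} = \frac{1}{4} × \frac{1}{2} = \frac{1}{8}\) W.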
Question 14.
In an oscillating LC circuit, the maximum charge on the capacitor is Q. The charge on the capacitor when the energy is stored equally between the electric and magnetic fields is-
(a) \(\frac { Q }{ 2 }\)
(b) \(\frac { Q }{ √3 }\)
(c) \(\frac { Q }{ √2 }\)
(d) \(\frac { Q }{ 2 }\)
(c) \(\frac { Q }{ √2 }\)
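A possible working for option (c): when the electric-field energy is half the total, \(\frac{q^{2}}{2C} = \frac{1}{2} × \frac{Q^{2}}{2C}\), so \(q = \frac{Q}{\sqrt{2}}\).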
Question 15.
A \(\frac{20}{π^{2}}\) H inductor is connected to a capacitor of capacitance C. The value of C in order to impart maximum power at 50 Hz is-
(a) 50 μF
(b) 0.5 μF
(c) 500 μF
(d) 5 μF
(d) 5 μF
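A possible working for option (d), using the resonance (maximum power) condition \(ω^{2}LC = 1\) with ω = 2π × 50 = 100π rad/s:
\(C = \frac{1}{ω^{2}L} = \frac{1}{(100π)^{2} × \frac{20}{π^{2}}} = \frac{1}{2 × 10^{5}}\) F = 5 μF.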
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Short Answer Questions
Question 1.
What is meant by electromagnetic induction?
Whenever the magnetic flux linked with a closed coil changes, an emf (electromotive force) is induced and hence an electric current flows in the circuit.
Question 2.
State Faraday’s laws of electromagnetic induction.
First law:
Whenever magnetic flux linked with a closed circuit changes, an emf is induced in the circuit.
Second law:
The magnitude of induced emf in a closed circuit is equal to the time rate of change of magnetic flux linked with the circuit.
Question 3.
State Lenz’s law.
State Lenz’s law:
Lenz’s law states that the direction of the induced current is such that it always opposes the cause responsible for its production.
Question 4.
State Fleming’s right hand rule.
The thumb, index finger and middle finger of the right hand are stretched out in mutually perpendicular directions. If the index finger points in the direction of the magnetic field and the thumb indicates the direction of motion of the conductor, then the middle finger will indicate the direction of the induced current.
Question 5.
How is Eddy current produced? How do they flow in a conductor?
Even for a conductor in the form of a sheet or plate, an emf is induced when magnetic flux linked with it changes. But the difference is that there is no definite loop or path for induced current to
flow away. As a result, the induced currents flow in concentric circular paths. As these electric currents resemble eddies of water, these are known as Eddy currents. They are also called Foucault currents.
Question 6.
Mention the ways of producing induced emf.
Induced emf can be produced by changing magnetic flux in any of the following ways:
1. By changing the magnetic field B
2. By changing the area A of the coil and
3. By changing the relative orientation θ of the coil with the magnetic field
Question 7.
What for an inductor is used? Give some examples.
Inductor is a device used to store energy in a magnetic field when an electric current flows through it. The typical examples are coils, solenoids and toroids.
Question 8.
What do you mean by self-induction?
If the magnetic flux is changed by changing the current, an emf is induced in that same coil. This phenomenon is known as self-induction.
Question 9.
What is meant by mutual induction?
When an electric current passing through a coil changes with time, an emf is induced in the neighbouring coil. This phenomenon is known as mutual induction.
Question 10.
Give the principle of AC generator.
Alternators work on the principle of electromagnetic induction. The relative motion between a conductor and a magnetic field changes the magnetic flux linked with the conductor which in turn, induces
an emf. The magnitude of the induced emf is given by Faraday’s law of electromagnetic induction and its direction by Fleming’s right hand rule.
Question 11.
List out the advantages of stationary armature-rotating field system of AC generator.
1. The current is drawn directly from fixed terminals on the stator without the use of brush contacts.
2. The insulation of stationary armature winding is easier.
3. The number of sliding contacts (slip rings) is reduced. Moreover, the sliding contacts are used for low-voltage DC Source.
4. Armature windings can be constructed more rigidly to prevent deformation due to any mechanical stress.
Question 12.
What are step-up and step-down transformers?
If the transformer converts an alternating current with low voltage into an alternating current with high voltage, it is called step-up transformer. On the contrary, if the transformer converts
alternating current with high voltage into an alternating current with low voltage, then it is called step-down transformer.
Question 13.
Define average value of an alternating current.
The average value of alternating current is defined as the average of all values of current over a positive half-cycle or negative half-cycle.
Question 14.
How will you define RMS value of an alternating current?
RMS value of alternating current is defined as that value of the steady current which when flowing through a given circuit for a given time produces the same amount of heat as produced by the
alternating current when flowing through the same circuit for the same time.
Question 15.
What are phasors?
A sinusoidal alternating voltage (or current) can be represented by a vector which rotates about the origin in anti-clockwise direction at a constant angular velocity ω. Such a rotating vector is
called a phasor.
Question 16.
Define electric resonance.
When the frequency of the applied alternating source is equal to the natural frequency of the RLC circuit, the current in the circuit reaches its maximum value. Then the circuit is said to be in
electrical resonance.
Question 17.
What do you mean by resonant frequency?
When the frequency of the applied alternating source (ω[r]) is equal to the natural frequency \(\left[\frac{1}{\sqrt{L C}}\right]\) of the RLC circuit, the current in the circuit reaches its maximum
value. Then the circuit is said to be in electrical resonance. The frequency at which resonance takes place is called resonant frequency. Resonant angular frequency, ω[r] = \(\frac { 1 }{ \sqrt { LC
} } \)
Question 18.
How will you define Q-factor?
It is defined as the ratio of voltage across L or C to the applied voltage.
Question 19.
What is meant by wattless current?
The component of current (I[RMS] sin φ), which has a phase angle of \(\frac{π}{2}\) with the voltage, is called the reactive component. The power consumed by it is zero, so it is also known as ‘wattless’ current.
Question 20.
Give any one definition of power factor.
The power factor is defined as the ratio of true power to the apparent power of an a.c. circuit. It is equal to the cosine of the phase angle between current and voltage in the a.c. circuit.
Question 21.
What are LC oscillations?
Whenever energy is given to a LC circuit, the electrical oscillations of definite frequency are generated. These oscillations are called LC oscillations. During LC oscillations, the total energy
remains constant. It means that LC oscillations take place in accordance with the law of conservation of energy.
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Long Answer Questions
Question 1.
Establish the fact that the relative motion between the coil and the magnet induces an emf in the coil of a closed circuit.
Whenever the magnetic flux linked with a closed coil changes, an emf (electromotive force) is induced and hence an electric current flows in the circuit.
The relative motion between the coil and the magnet induces:
In the first experiment, when a bar magnet is placed close to a coil, some of the magnetic field lines of the bar magnet pass through the coil i.e., the magnetic flux is linked with the coil. When
the bar magnet and the coil approach each other, the magnetic flux linked with the coil increases. So this increase in magnetic flux induces an emf and hence a transient electric current flows in the
circuit in one direction (Figure(a)).
At the same time, when they recede away from one another, the magnetic flux linked with the coil decreases. The decrease in magnetic flux again induces an emf in opposite direction and hence an
electric current flows in opposite direction (Figure (b)). So there is deflection in the galvanometer when there is a relative motion between the coil and the magnet.
In the second experiment, when the primary coil P carries an electric current, a magnetic field is established around it. The magnetic lines of this field pass through itself and the neighbouring
secondary coil S.
When the primary circuit is open, no electric current flows in it and hence the magnetic flux linked with the secondary coil is zero (Figure(a)).
However, when the primary circuit is closed, the increasing current builds up a magnetic field around the primary coil. Therefore, the magnetic flux linked with the secondary coil increases. This
increasing flux linked induces a transient electric current in the secondary coil (Figure(b)).
When the electric current in the primary coil reaches a steady value, the magnetic flux linked with the secondary coil does not change and the electric current in the secondary coil will disappear.
Similarly, when the primary circuit is broken, the decreasing primary current induces an electric current in the secondary coil, but in the opposite direction (Figure (c)). So there is deflection in
the galvanometer whenever there is a change in the primary current
Question 2.
Give an illustration of determining direction of induced current by using Lenz’s law.
Illustration 1:
Consider a uniform magnetic field, with its field lines perpendicular to the plane of the paper and pointing inwards. These field lines are represented by crosses (x) as shown in figure (a). A
rectangular metallic frame ABCD is placed in this magnetic field, with its plane perpendicular to the field. The arm AB is movable so that it can slide towards right or left.
If the arm AB slides to our right side, the number of field lines (magnetic flux) passing through the frame ABCD increases and a current is induced. As suggested by Lenz’s law, the induced current
opposes this flux increase and it tries to reduce it by producing another magnetic field pointing outwards i.e., opposite to the existing magnetic field.
The magnetic lines of this induced field are represented by circles in the figure (b). From the direction of the magnetic field thus produced, the direction of the induced current is found to be
anti-clockwise by using right-hand thumb rule.
The leftward motion of arm AB decreases magnetic flux. The induced current, this time, produces a magnetic field in the inward direction i.e., in the direction of the existing magnetic field (figure
(c)). Therefore, the flux decrease is opposed by the flow of induced current. From this, it is found that induced current flows in clockwise direction.
Illustration 2:
Let us move a bar magnet towards the solenoid, with its north pole pointing the solenoid as shown in figure (b). This motion increases the magnetic flux of the coil which in turn, induces an electric
current. Due to the flow of induced current, the coil becomes a magnetic dipole whose two magnetic poles are on either end of the coil.
In this case, the cause producing the induced current is the movement of the magnet. According to Lenz’s law, the induced current should flow in such a way that it opposes the movement of the north
pole towards coil. It is possible if the end nearer to the magnet becomes north pole (figure (b)).
Then it repels the north pole of the bar magnet and opposes the movement of the magnet. Once pole ends are known, the direction of the induced current could be found by using right hand thumb rule.
When the bar magnet is withdrawn, the nearer end becomes south pole which attracts north pole of the bar magnet, opposing the receding motion of the magnet (figure (c)). Thus the direction of the
induced current can be found from Lenz’s law.
Question 3.
Show that Lenz’s law is in accordance with the law of conservation of energy.
Conservation of energy:
The truth of Lenz’s law can be established on the basis of the law of conservation of energy. According to Lenz’s law, when a magnet is moved either towards or away from a coil, the induced current
produced opposes its motion. As a result, there will always be a resisting force on the moving magnet.
Work has to be done by some external agency to move the magnet against this resisting force. Here the mechanical energy of the moving magnet is converted into the electrical energy which in turn,
gets converted into Joule heat in the coil i.e., energy is converted from one form to another.
Question 4.
Obtain an expression for motional emf from Lorentz force.
Motional emf from Lorentz force:
Consider a straight conducting rod AB of length l in a uniform magnetic field \(\vec { B } \) which is directed perpendicularly into the plane of the paper. The length of the rod is normal to the
magnetic field. Let the rod move with a constant velocity \(\vec { v } \) towards right side.
When the rod moves, the free electrons present in it also move with same velocity \(\vec { v } \) in \(\vec { B } \). As a result, the Lorentz force acts on free electrons in the direction from B to
A and is given by the relation
\(\vec{F}_{B} = -e(\vec{v} × \vec{B})\) ……. (1)
The action of this Lorentz force is to accumulate the free electrons at the end A. This accumulation of free electrons produces a potential difference across the rod which in turn establishes an
electric field E directed along BA. Due to the electric field E, the coulomb force starts acting on the free electrons along AB and is given by
\(\vec { F } \)[E] = -e\(\vec { E } \) ……. (2)
The magnitude of the electric field \(\vec { E } \) keeps on increasing as long as accumulation of electrons at the end A continues. The force \(\vec { F } \)[E] also increases until equilibrium is
reached. At equilibrium, the magnetic Lorentz force \(\vec { F } \)[B] and the coulomb force \(\vec { F } \)[E] balance each other and no further accumulation of free electrons at the end A takes
place, i.e.,
\(\left|\vec{F}_{B}\right| = \left|\vec{F}_{E}\right|\)
evB sin 90° = eE
vB = E ……. (3)
The potential difference between two ends of the rod is
Figure: Motional emf from Lorentz force
V = El
V = vBl
Thus the Lorentz force on the free electrons is responsible for maintaining this potential difference and hence produces an emf
ε = Blv ….. (4)
As this emf is produced due to the movement of the rod, it is often called as motional emf.
Question 5.
Using Faraday’s law of electromagnetic induction, derive an equation for motional emf.
Motional emf from Faraday’s law:
Let us consider a rectangular conducting loop of width l in a uniform magnetic field \(\vec { B } \) which is perpendicular to the plane of the loop and is directed inwards. A part of the loop is in
the magnetic field while the remaining part is outside the field.
Figure: Motional emf from Faraday’s law
When the loop is pulled with a constant velocity \(\vec { v } \) to the right, the area of the portion of the loop within the magnetic field will decrease. Thus, the flux linked with the loop will
also decrease. According to Faraday’s law, an electric current is induced in the loop which flow’s in a direction so as to oppose the pull of the loop.
Let x be the length of the loop which is still within the magnetic field, then its area is lx. The magnetic flux linked with the loop is
As this magnetic flux decreases due to the movement of the loop, the magnitude of the induced emf is given by
ε = \(\frac {{ dΦ }_{B}}{ dt }\) = \(\frac { d }{ dt }\) (Blx)
Here, both B and l are constants. Therefore,
ε = Bl \(\frac{dx}{dt}\) = Blv …… (2)
where v = \(\frac { dx}{ dt }\) is the velocity of the loop. This emf is known as motional emf since it is produced due to the movement of the loop in the magnetic field.
Question 6.
Give the uses of Foucault current.
Though the production of eddy current is undesirable in some cases, it is useful in some other cases. A few of them are
1. Induction stove
2. Eddy current brake
3. Eddy current testing
4. Electromagnetic damping
1. Induction stove:
Induction stove is used to cook the food quickly and safely with less energy consumption. Below the cooking zone, there is a tightly wound coil of insulated wire. The cooking pan made of suitable
material, is placed over the cooking zone.
When the stove is switched on, an alternating current flowing in the coil produces high frequency alternating magnetic field which induces very strong eddy currents in the cooking pan. The eddy
currents in the pan produce so much of heat due to Joule heating which is used to cook the food.
2. Eddy current brake:
This eddy current braking system is generally used in high speed trains and roller coasters. Strong electromagnets are fixed just above the rails. To stop the train, electromagnets are switched on.
The magnetic field of these magnets induces eddy currents in the rails which oppose or resist the movement of the train. This is Eddy current linear brake.
In some cases, the circular disc, connected to the wheel of the train through a common shaft, is made to rotate in between the poles of an electromagnet. When there is a relative motion between the
disc and the magnet, eddy currents are induced in the disc which stop the train. This is Eddy current circular brake.
3. Eddy current testing:
It is one of the simple non-destructive testing methods to find defects like surface cracks and air bubbles present in a specimen. A coil of insulated wire is given an alternating electric current so
that it produces an alternating magnetic field.
When this coil is brought near the test surface, eddy current is induced in the test surface. The presence of defects causes the change in phase and amplitude of the eddy current that can be detected
by some other means. In this way, the defects present in the specimen are identified.
4. Electromagnetic damping:
The armature of the galvanometer coil is wound on a soft iron cylinder. Once the armature is deflected, the relative motion between the soft iron cylinder and the radial magnetic field induces eddy current in the cylinder. The damping force due to the flow of eddy current brings the armature to rest immediately and then the galvanometer shows a steady deflection. This is called electromagnetic damping.
Question 7.
Define self-inductance of a coil interns of (i) magnetic flux and (ii) induced emf.
Self-inductance or simply inductance of a coil is defined as the flux linkage of the coil when 1A current flows through it.
When the current i changes with time, an emf is induced in it. From Faraday’s law of electromagnetic induction, this self-induced emf is given by
ε = –\(\frac{d\left(\mathrm{N} \Phi_{\mathrm{B}}\right)}{d t}\) = –\(\frac { d(Li)}{ dt }\)
∴ ε = -L\(\frac { di}{ dt }\) or L = \(\frac { -ε}{ di/dt }\)
The negative sign in the above equation means that the self-induced emf always opposes the change in current with respect to time. If \(\frac { di}{ dt }\) = 1 As^-1, then L= -ε. Inductance of a coil
is also defined as the opposing emf induced in the coil when the rate of change of current through the coil is 1 A s^-1.
Question 8.
How will you define the unit of inductance?
Unit of inductance: Inductance is a scalar and its unit is Wb A^-1 or V s A^-1. It is also measured in henry (H).
1 H = 1 Wb A^-1 = 1 V s A^-1
The dimensional formula of inductance is M L^2 T^-2A^-2.
If i = 1 A and NΦ[B] = 1 Wb turns, then L = 1 H.
Therefore, the inductance of the coil is said to be one henry if a current of 1 A produces unit flux linkage in the coil.
If \(\frac { di}{ dt }\) = 1 As^-1 and ε = -1 V, then L = 1 H.
Therefore, the inductance of the coil is one henry if a current changing at the rate of 1 A s^-1 induces an opposing emf of 1 V in it.
Question 9.
What do you understand by self-inductance of a coil? Give its physical significance.
Self-inductance or simply inductance of a coil is defined as the flux linkage of the coil when 1A current flows through it.
When the current i changes with time, an emf is induced in it. From Faraday's law of electromagnetic induction, this self-induced emf is given by ε = -L\(\frac{di}{dt}\).
Physical significance of inductance:
When a circuit is switched on, the increasing current induces an emf which opposes the growth of current in a circuit. Likewise, when circuit is broken, the decreasing current induces an emf in the
reverse direction. This emf now opposes the decay of current.
Figure: Induced emf ε opposes the changing current i
Thus, inductance of the coil opposes any change in current and tries to maintain the original state.
Question 10.
Assuming that the length of the solenoid is large when compared to its diameter, find the equation for its inductance.
Self-inductance of a long solenoid:
Consider a long solenoid of length l and cross-sectional area A. Let n be the number of turns per unit length (or turn density) of the solenoid. When an electric current i is passed through the
solenoid, a magnetic field is produced by it which is almost uniform and is directed along the axis of the solenoid. The magnetic field at any point inside the solenoid is given by
B = μ[0]ni
As this magnetic field passes through the solenoid, the windings of the solenoid are linked by the field lines. The magnetic flux passing through each turn is Φ[B] = BA = (μ[0]ni)A.
The total magnetic flux linked or flux linkage of the solenoid with N turns (the total number of turns N is given by N = nl) is
NΦ[B] = (nl) (μ[0]ni)A
NΦ[B] = (μ[0]n^2Al)i ….. (1)
From the definition of self-inductance,
NΦ[B] = Li ….. (2)
Comparing equations (1) and (2), we have L = μ[0]n^2Al
From the above equation, it is clear that inductance depends on the geometry of the solenoid (turn density n, cross-sectional area A, length l) and the medium present inside the solenoid. If the
solenoid is filled with a dielectric medium of relative permeability μ[r], then
L = μ[0]μ[r]n^2Al
or L = μn^2Al, where μ = μ[0]μ[r] is the permeability of the medium.
Question 11.
An inductor of inductance L carries an electric current i. How much energy is stored while establishing the current in it?
Energy stored in an inductor:
Whenever a current is established in the circuit, the inductance opposes the growth of the current. In order to establish a current in the circuit, work is done against this opposition by some
external agency. This work done is stored as magnetic potential energy.
Let us assume that electrical resistance of the inductor is negligible and inductor effect alone is considered. The induced emf e at any instant t is
ε = -L\(\frac { di}{ dt }\) …… (1)
Let dW be work done in moving a charge dq in a time dt against the opposition, then
dW = -εdq = -εi dt [∵ dq = i dt]
Substituting for s value from equation (1)
= – \(\left(-\mathrm{L} \frac{d i}{d t}\right)\) idt
dW = Li di …… (2)
Total work done in establishing the current i is
W = \(\int_{0}^{i}\) Li di = \(\frac{1}{2}\)Li^2 …… (3)
This work done is stored as magnetic potential energy.
U[B] = \(\frac { 1 }{ 2 }\) Li^2 …….. (4)
Question 12.
Show that the mutual inductance between a pair of coils is same (M[12] = M[21]).
Mutual induction:
When an electric current passing through a coil changes with time, an emf is induced in the neighbouring coil. This phenomenon is known as mutual induction and the emf is called mutually induced emf.
Consider two coils which are placed close to each other. If an electric current i[1] is sent through coil 1, the magnetic field produced by it is also linked with coil 2. Let Φ[21] be the magnetic
flux linked with each turn of the coil 2 of N[2] turns due to coil 1, then the total flux linked with coil 2 (N[2]Φ[21]) is proportional to the current i[1] in the coil 1.
The constant of proportionality M[21] is the mutual inductance of the coil 2 with respect to coil 1. It is also called as coefficient of mutual induction. If i[1] = 1A, then M[21] = N[2]Φ[21].
Therefore, the mutual inductance M[21] is defined as the flux linkage of the coil 2 when 1A current flows through coil 1. When the current changes with time, an emf ε[2] is induced in coil 2. From
Faraday's law of electromagnetic induction, this mutually induced emf ε[2] is given by ε[2] = -M[21]\(\frac{di_{1}}{dt}\).
The negative sign in the above equation shows that the mutually induced emf always opposes the change in current i, with respect to time. If \(\frac { di }{ dt }\) = 1 As^-1, then M[21] = -ε[2].
Mutual inductance M[21], is also defined as the opposing emf induced in the coil 2 when the rate of change of current through the coil 1 is 1 As^-1. Similarly, if an electric current i[2] through
coil 2 changes with time, then an emf ε[1] is induced in coil 1. Therefore, ε[1] = -M[12]\(\frac{di_{2}}{dt}\),
where M[12] is the mutual inductance of the coil 1 with respect to coil 2. It can be shown that for a given pair of coils, the mutual inductance is same, i.e., M[21] = M[12] = M.
In general, the mutual induction between two coils depends on size, shape, the number of turns of the coils, their relative orientation and permeability of the medium.
Question 13.
How will you induce an emf by changing the area enclosed by the coil?
Induction of emf by changing the area of the coil:
Consider a conducting rod of length l moving with a velocity v towards left on a rectangular metallic framework. The whole arrangement is placed in a uniform magnetic field \(\vec{B}\) whose magnetic lines are perpendicularly directed into the plane of the paper. As the rod moves from AB to DC in a time dt, the area enclosed by the loop and hence the magnetic flux through the loop changes.
The change in magnetic flux in time dt is
dΦ[B] = B × change in area = B × Area ABCD = Blv dt, since Area ABCD = l(v dt)
or \(\frac {{ dΦ }_{B}}{ dt }\) = Blv
As a result of change in flux, an emf is generated in the loop. The magnitude of the induced emf is
ε = \(\frac {{ dΦ }_{B}}{ dt }\) = Blv
This emf is called motional emf. The direction of induced current is found to be clockwise from Fleming’s right hand rule.
Question 14.
Show mathematically that the rotation of a coil in a magnetic field over one rotation induces an alternating emf of one cycle.
Induction of emf by changing relative orientation of the coil with the magnetic field:
Consider a rectangular coil of N turns kept in a uniform magnetic field \(\vec { B } \) figure (a). The coil rotates in anti-clockwise direction with an angular velocity ω about an axis,
perpendicular to the field. At time = 0, the plane of the coil is perpendicular to the field and the flux linked with the coil has its maximum value Φ[m] = BA (where A is the area of the coil).
In a time t seconds, the coil is rotated through an angle θ (= ωt) in anti-clockwise direction. In this position, the flux linked is Φ[m] cos ωt, a component of Φ[m] normal to the plane of the coil
(figure (b)). The component parallel to the plane (Φ[m] sin ωt) has no role in electromagnetic induction. Therefore, the flux linkage at this deflected position is NΦ[B] = NΦ[m] cos ωt. According to
Faraday’s law, the emf induced at that instant is
ε = -\(\frac{d}{dt}\) (NΦ[B]) = -\(\frac{d}{dt}\) (NΦ[m] cos ωt)
= -NΦ[m] (-sin ωt)ω = NΦ[m] ω sin ωt
When the coil is rotated through 90° from initial position, sin ωt = 1, Then the maximum value of induced emf is
ε[m] = NΦ[m]ω = NBAω since Φ[m] = BA
Therefore, the value of induced emf at that instant is then given by
ε = ε[m] sin ωt
It is seen that the induced emf varies as a sine function of the time angle ωt. The graph between induced emf and time angle for one rotation of the coil will be a sine curve and the emf varying in this
manner is called sinusoidal emf or alternating emf.
Question 15.
Elaborate the standard construction details of AC generator.
Alternator consists of two major parts, namely stator and rotor. As their names suggest, stator is stationary while rotor rotates inside the stator. In any standard construction of commercial alternators, the armature winding is mounted on stator and the field magnet on rotor. The construction details of stator, rotor and various other components involved in them are given below.
alternators, the armature winding is mounted on stator and the field magnet on rotor. The construction details of stator, rotor and various other components involved in them are given below.
(i) Stator:
The stationary part which has armature windings mounted in it is called stator. It has three components, namely stator frame, stator core and armature winding.
Stator frame:
This is the outer frame used for holding stator core and armature windings in proper position. Stator frame provides best ventilation with the help of holes provided in the frame itself.
Stator core:
Stator core or armature core is made up of iron or steel alloy. It is a hollow cylinder and is laminated to minimize eddy current loss. The slots are cut on inner surface of the core to accommodate
armature windings.
Armature winding:
Armature winding is the coil, wound on slots provided in the armature core. One or more than one coil may be employed, depending on the type of alternator. Two types of windings are commonly used.
They are (i) single-layer winding and (ii) double-layer winding. In single-layer winding, a slot is occupied by a coil as a single layer. But in double-layer winding, the coils are split into two
layers such as top and bottom layers.
(ii) Rotor:
Rotor contains magnetic field windings. The magnetic poles are magnetized by DC source. The ends of field windings are connected to a pair of slip rings, attached to a common shaft about which rotor
rotates. Slip rings rotate along with rotor. To maintain connection between the DC source and field windings, two brushes are used which . continuously slide over the slip rings.
There are 2 types of rotors used in alternators:
1. salient pole rotor
2. cylindrical pole rotor.
1. Salient pole rotor:
The word salient means projecting. This rotor has a number of projecting poles having their bases riveted to the rotor. It is mainly used in low-speed alternators.
2. Cylindrical pole rotor:
This rotor consists of a smooth solid cylinder. The slots are cut on the outer surface of the cylinder along its length. It is suitable for very high speed alternators.
The frequency of alternating emf induced is directly proportional to the rotor speed. In order to maintain the frequency constant, the rotor must run at a constant speed. These are standard
construction details of alternators.
Question 16.
Explain the working of a single-phase AC generator with necessary diagram.
Single phase AC generator: In a single phase AC generator, the armature conductors are connected in series so as to form a single circuit which generates a single-phase alternating emf and hence it
is called single-phase alternator.
The simplified version of a AC generator is discussed here. Consider a stator core consisting of 2 slots in which 2 armature conductors PQ and RS are mounted to form single-turn rectangular loop
PQRS. Rotor has 2 salient poles with field windings which can be magnetized by means of DC source.
The loop PQRS is stationary and is perpendicular to the plane of the paper. When field windings are excited, magnetic field is produced around it. The direction of magnetic field passing through the
armature core. Let the field magnet be rotated in clockwise direction by the prime mover. The axis of rotation is perpendicular to the plane of the paper.
Assume that initial position of the field magnet is horizontal. At that instant, the direction of magnetic field is perpendicular to the plane of the loop PQRS. The induced emf is zero. This is
represented by origin O in the graph between induced emf and time angle.
When the field magnet rotates through 90°, the magnetic field becomes parallel to PQRS. The induced emfs across PQ and RS would become maximum. Since they are connected in series, the emfs are added up and the direction of the total induced emf is given by Fleming's right hand rule. Care has to be taken while applying this rule; the thumb indicates the direction of the motion of the conductor with respect to the field.
For clockwise rotating poles, the conductor appears to be rotating anti-clockwise. Hence, thumb should point to the left. The direction of the induced emf is at right angles to the plane of the
paper. For PQ, it is downwards and for RS upwards. Therefore, the current flows along PQRS. The point A in the graph represents this maximum emf.
For the rotation of 180° from the initial position, the field is again perpendicular to PQRS and the induced emf becomes zero. This is represented by point B. The field magnet becomes again parallel
to PQRS for 270° rotation of field magnet. The induced emf is maximum but the direction is reversed. Thus the current flows along SRQP This is represented by point C.
On completion of 360°, the induced emf becomes zero and is represented by the point D. From the graph, it is clear that emf induced in PQRS is alternating in nature. Therefore, when field magnet
completes one rotation, induced emf in PQRS finishes one cycle. For this construction, the frequency of the induced emf depends on the speed at which the field magnet rotates.
Question 17.
How are the three different emfs generated in a three-phase AC generator? Show the graphical representation of these three emfs.
Three-phase AC generator:
Some AC generators may have more than one coil in the armature core and each coil produces an alternating emf. In these generators, more than one emf is produced. Thus they are called poly-phase
generators. If there are two alternating emfs produced in a generator, it is called two-phase generator. In some AC generators, there are three separate coils, which would give three separate emfs.
Hence they are called three-phase AC generators.
In the simplified construction of three-phase AC generator, the armature core has 6 slots, cut on its inner rim. Each slot is 60° away from one another. Six armature conductors are mounted in these
slots. The conductors 1 and 4 are joined in series to form coil 1. The conductors 3 and 6 form coil 2 while the conductors 5 and 2 form coil 3. So, these coils are rectangular in shape and are 120°
apart from one another.
The initial position of the field magnet is horizontal and field direction is perpendicular to the plane of the coil 1. As it is seen in single phase AC generator, when field magnet is rotated from
that position in clockwise direction, alternating emf ε[1] in coil 1 begins a cycle from origin O.
The corresponding cycle for alternating emf ε[2] in coil 2 starts at point A after field magnet has rotated through 120°. Therefore, the phase difference between ε[1] and ε[2] is 120°. Similarly, emf
ε[3] in coil 3 would begin its cycle at point B after 240° rotation of field magnet from initial position. Thus these emfs produced in the three phase AC generator have 120° phase difference between
one another.
Question 18.
Explain the construction and working of transformer.
Construction and working of transformer:
The principle of transformer is the mutual induction between two coils. That is, when an electric current passing through a coil changes with time, an emf is induced in the neighbouring coil.
In the simple construction of transformers, there are two coils of high mutual inductance wound over the same transformer core. The core is generally laminated and is made up of a good magnetic
material like silicon steel. Coils are electrically insulated but magnetically linked via transformer core.
The coil across which alternating voltage is applied is called primary coil P and the coil from which output power is drawn out is called secondary coil S. The assembled core and coils are kept in a
container which is filled with suitable medium for better insulation and cooling purpose.
If the primary coil is connected to a source of alternating voltage, an alternating magnetic flux is set up in the laminated core. If there is no magnetic flux leakage, then the whole of the magnetic flux linked with the primary coil is also linked with the secondary coil. This means that the rate at which the magnetic flux changes through each turn is the same for both primary and secondary coils. As a result of flux
change, emf is induced in both primary and secondary coils. The emf induced in the primary coil ε[p] is almost equal and opposite to the applied voltage υ[p] and is given by
υ[p] = ε[p] = -N[p] \(\frac {{ dΦ }_{B}}{ dt }\) …….. (1)
The frequency of alternating magnetic flux in the core is same as the frequency of the applied voltage. Therefore, induced emf in secondary will also have same frequency as that of applied voltage.
The emf induced in the secondary coil ε[s] is given by
ε[s] = -N[s] \(\frac {{ dΦ }_{B}}{ dt }\)
where N[p] and N[s] are the number of turns in the primary and secondary coil, respectively. If the secondary circuit is open, then ε[s] = υ[s] where υ[s] is the voltage across secondary coil.
υ[s] ε[s] = -N[s] \(\frac {{ dΦ }_{B}}{ dt }\) ……… (2)
From equation (1) and (2),
\(\frac{υ_{s}}{υ_{p}}\) = \(\frac{N_{s}}{N_{p}}\) = K …….. (3)
This constant K is known as voltage transformation ratio. For an ideal transformer,
Input power υ[p] i[p] = Output power υ[s]i[s]
where i[p] and i[s] are the currents in the primary and secondary coil respectively. Therefore,
\(\frac{υ_{s}}{υ_{p}}\) = \(\frac{N_{s}}{N_{p}}\) = \(\frac{i_{p}}{i_{s}}\) ……. (4)
Written in terms of the amplitudes of the corresponding quantities, equation (4) becomes
\(\frac{V_{s}}{V_{p}}\) = \(\frac{N_{s}}{N_{p}}\) = \(\frac{I_{p}}{I_{s}}\) = K
(i) If N[s] > N[p] ( or K > 1)
∴ V[s] > V[p] and I[s] < I[p].
This is the case of a step-up transformer, in which the voltage is increased and the corresponding current is decreased.
(ii) If N[s] < N[p] (or K < 1)
∴ V[s] < V[p] and I[s] > I[p]
This is step-down transformer where voltage is decreased and the current is increased.
Question 19.
Mention the various energy losses in a transformer.
Energy losses in a transformer: Transformers do not have any moving parts so that its efficiency is much higher than that of rotating machines like generators and motors. But there are many factors
which lead to energy loss in a transformer.
(i) Core loss or Iron loss:
This loss takes place in transformer core. Hysteresis loss and eddy current loss are known as core loss or Iron loss. When transformer core is magnetized and demagnetized repeatedly by the
alternating voltage applied across primary coil, hysteresis takes place due to which some energy is lost in the form of heat.
Hysteresis loss is minimized by using steel of high silicon content in making transformer core. Alternating magnetic flux in the core induces eddy currents in it. Therefore there is energy loss due
to the flow of eddy current, called eddy current loss which is minimized by using very thin laminations of transformer core.
(ii) Copper loss:
Transformer windings have electrical resistance. When an electric current flows through them, some amount of energy is dissipated due to Joule heating. This energy loss is called copper loss which is
minimized by using wires of larger diameter.
(iii) Flux leakage:
Flux leakage happens when the magnetic lines of the primary coil are not completely linked with the secondary coil. Energy loss due to this flux leakage is minimized by winding the coils one over the other.
Question 20.
Give the advantage of AC in long distance power transmission with an example.
Advantages of AC in long distance power transmission:
Electric power is produced on a large scale at electric power stations with the help of AC generators. These power stations are classified based on the type of fuel used as thermal, hydro electric
and nuclear power stations. Most of these stations are located at remote places.
Hence the electric power generated is transmitted over long distances through transmission lines to reach towns or cities where it is actually consumed. This process is called power transmission. But
there is a difficulty during power transmission. A sizable fraction of electric power is lost due to Joule heating (i^2R) in the transmission lines which are hundreds of kilometer long.
This power loss can be tackled either by reducing current i or by reducing resistance R of the transmission lines. The resistance R can be reduced with thick wires of copper or aluminium. But this
increases the cost of production of transmission lines and other related expenses. So this way of reducing power loss is not economically viable.
Since the power produced is alternating in nature, there is a way out. The most important property of alternating voltage is that it can be stepped up and stepped down by using transformers; this property can be exploited in reducing current and thereby reducing power losses to a greater extent.
At the transmitting point, the voltage is increased and the corresponding current is decreased by using step-up transformer. Then it is transmitted through transmission lines. This reduced current at
high voltage reaches the destination without any appreciable loss.
At the receiving point, the voltage is decreased and the current is increased to appropriate values by using step-down transformer and then it is given to consumers. Thus power transmission is done
efficiently and economically.
An electric power of 2 MW is transmitted to a place through transmission lines of total resistance, say R = 40 Ω, at two different voltages. One is lower voltage (10 kV) and the other is higher (100
kV). Let us now calculate and compare power losses in these two cases.
Case I:
P = 2 MW; R = 40 Ω; V = 10 kV
Power, P = VI
∴ Current, I = \(\frac { P }{ V }\) = \(\frac {{ 2 × 10 }^{6}}{{ 10 × 10 }^{3}}\) = 200 A
Power loss = Heat produced = I^2R = (200)^2 × 40 = 1.6 × 10^6 W
% of power loss =\(\frac {{ 1.6 × 10 }^{6}}{{ 2 × 10 }^{6}}\) × 100% = 0.8 × 100% = 80%
Case II:
P = 2 MW; R = 40 Ω; V = 100 kV
∴ Current, I = \(\frac { P }{ V }\) = \(\frac {{ 2 × 10 }^{6}}{{ 100 × 10 }^{3}}\) = 20 A
Power loss = Heat produced = I^2R = (20)^2 × 40 = 0.016 × 10^6 W
% of power loss = \(\frac {{ 0.016 × 10 }^{6}}{{ 2 × 10 }^{6}}\) × 100% = 0.008 × 100% = 0.8%
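The arithmetic in the two cases can be cross-checked with a short Python sketch (purely a verification of the numbers above; the helper name line_loss is just for illustration):

def line_loss(power_w, line_resistance_ohm, voltage_v):
    """Return (line current, power lost to Joule heating, percentage loss)."""
    current = power_w / voltage_v              # I = P / V
    loss = current ** 2 * line_resistance_ohm  # I^2 R heating in the lines
    return current, loss, 100.0 * loss / power_w

P, R = 2e6, 40.0                               # 2 MW transmitted, 40 ohm line
for V in (10e3, 100e3):                        # 10 kV vs 100 kV transmission
    I, loss, pct = line_loss(P, R, V)
    print(f"V = {V/1e3:.0f} kV: I = {I:.0f} A, loss = {loss/1e6:.3f} MW ({pct:.1f} %)")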
Question 21.
Find out the phase relationship between voltage and current in a pure inductive circuit.
AC circuit containing only an inductor:
Consider a circuit containing a pure inductor of inductance L connected across an alternating voltage source. The alternating voltage is given by the equation.
υ = V[m] sin ωt …(1)
The alternating current flowing through the inductor induces a self-induced emf or back emf in the circuit. The back emf is given by
Back emf, ε = -L\(\frac { di }{ dt }\)
By applying Kirchoff’s loop rule to the purely inductive circuit, we get
υ + ε = 0
V[m] sin ωt = L\(\frac { di }{ dt }\)
di = \(\frac {{ V }_{m}}{ L }\) sin ωt dt
i = \(\frac {{ V }_{m}}{ L }\) \(\int { sin } \) ωt dt = \(\frac{{ V }_{m}}{ Lω }\) (-cos ωt) + constant
The integration constant in the above equation is independent of time. Since the voltage in the circuit has only a time-dependent part, we can set the time-independent part of the current (the integration constant) to zero. Using -cos ωt = sin \(\left(\omega t-\frac{\pi}{2}\right)\), we get
i = I[m] sin \(\left(\omega t-\frac{\pi}{2}\right)\) …… (2)
where \(\frac{{ V }_{m}}{ Lω }\) = I[m], the peak value of the alternating current in the circuit. From equations (1) and (2), it is evident that the current lags behind the applied voltage by \(\frac{π}{ 2 }\) in an inductive circuit.
This fact is depicted in the phasor diagram. In the wave diagram also, it is seen that current lags the voltage by 90°.
Inductive reactance X[L]:
The peak value of current I[m] is given by I[m] = \(\frac{{ V }_{m}}{ Lω }\). Let us compare this equation with I[m] = \(\frac{{ V }_{m}}{ R }\) from the resistive circuit. The quantity ωL plays the same role as the resistance R in a resistive circuit. This is the resistance offered by the inductor, called inductive reactance (X[L]). It is measured in ohm.
X[L] = ωL
The inductive reactance (X[L]) varies directly as the frequency.
X[L] = 2πfL …….. (3)
where ƒ is the frequency of the alternating current. For a steady current, ƒ= 0. Therefore, X[L] = 0. Thus an ideal inductor offers no resistance to steady DC current.
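As a quick illustration of X[L] = 2πfL, the short Python sketch below (with an assumed, purely illustrative inductance of 25 mH) shows the reactance growing linearly with frequency and vanishing for steady DC (f = 0):

import math

L = 25e-3                      # assumed inductance in henry (illustrative only)
for f in (0, 50, 100, 500):    # frequencies in hertz
    X_L = 2 * math.pi * f * L  # inductive reactance
    print(f"f = {f:3d} Hz -> X_L = {X_L:6.2f} ohm")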
Question 22.
Derive an expression for phase angle between the applied voltage and current in a series RLC circuit.
AC circuit containing a resistor, an inductor and a capacitor in series – Series RLC
Consider a circuit containing a resistor of resistance R, an inductor of inductance L and a capacitor of capacitance C connected across an alternating voltage source. The applied alternating voltage
is given by the equation.
υ = V[m] sin ωt …… (1)
Let i be the resulting circuit current in the circuit at that instant. As a result, the voltage is developed across R, L and C.
We know that voltage across R (V[R]) is in phase with i, voltage across L (V[L]) leads i by π/2 and voltage across C (V[C]) lags i by π/2.
The phasor diagram is drawn with current as the reference phasor. The current is represented by the phasor
\(\vec { OI } \), V[R] by \(\vec { OA } \) ; V[L] by \(\vec { OB } \) and V[C] by \(\vec { OC } \).
The length of these phasors are
OI = I[m]; OA = I[m]R; OB = I[m]X[L]; OC = I[m]X[C]
The circuit is either effectively inductive or capacitive or resistive, depending on the values of V[L] and V[C]. Let us assume that V[L] > V[C], so that the net voltage drop across the L – C combination is V[L] – V[C], which is represented by a phasor \(\vec { AD } \).
By parallelogram law, the diagonal \(\vec { OE } \) gives the resultant voltage υ of V[R] and (V[L] – V[C]) and its length OE is equal to V[m]. Therefore,
Z is called the impedance of the circuit, which refers to the effective opposition offered to the circuit current by the series RLC circuit. The voltage triangle and impedance triangle are shown in the graphical representation.
From the phasor diagram, the phase angle between υ and i is found from the following relation
Special cases:
(i) If X[L] > X[C], (X[L] – X[C]) is positive and phase angle φ
is also positive. It means that the applied voltage leads the current by φ (or current lags behind voltage by φ). The circuit is inductive.
∴ υ = V[m] sin ωt; i = I[m] sin(ωt – φ)
(ii) If X[L] < X[C], (X[L] – X[C]) is negative and φ is also negative. Therefore current leads voltage by φ and the circuit is capacitive.
∴ υ = V[m] sin ωt; i = I[m] sin(ωt + φ)
(iii) If X[L] = X[C], φ is zero. Therefore current and voltage are in the same phase and the circuit is resistive.
∴ υ = V[m] sin ωt; i = I[m] sin ωt
Question 23.
Define inductive and capacitive reactance. Give their units.
Inductive reactance X[L]:
The peak value of current I[m] is given by I[m] = \(\frac{{ V }_{m}}{ Lω }\). Let us compare
this equation with I[m] = \(\frac{{ V }_{m}}{ R }\) from the resistive circuit. The quantity ωL plays the same role as the resistance R in a resistive circuit. This is the resistance offered by the
inductor, called inductive reactance (X[L]). It is measured in ohm.
X[L] = ωL
Capacitive reactance X[C]:
The peak value of current I is given by I[m] = \(\frac{\mathrm{v}_{\mathrm{m}}}{1 / \mathrm{c} \omega}\). Let us compare this equation with I[m] = \(\frac{{ V }_{m}}{ R }\) from resistive circuit.
The quantity \(\frac { 1 }{ ωC }\) plays the same role as the resistance R in a resistive circuit. This is the resistance offered by the capacitor, called capacitive reactance (X[C]). It is measured in ohm.
X[C] = \(\frac{ 1 }{ ωC }\).
Question 24.
Obtain an expression for average power of AC over a cycle. Discuss its special cases. Power of a circuit is defined as the rate of consumption of electric energy in that circuit.
It is given by the product of the voltage and current. In an AC circuit, the voltage and current vary continuously with time. Let us first calculate the power at an instant and then it is averaged
over a complete cycle.
The alternating voltage and alternating current in the series RLC circuit at an instant are given by
υ = V[m] sin ωt and i = I[m] sin (ωt + φ)
where φ is the phase angle between υ and i. The instantaneous power is then written as
P = υi = V[m ]I[m] sin ωt sin(ωt + φ)
= V[m ]I[m] sin ωt (sin ωt cos φ + cos ωt sin φ)
P = V[m ]I[m] (cos φ sin^2 ωt + sin ωt cos ωt sin φ) …… (1)
Here the average of sin^2 ωt over a cycle is \(\frac { 1 }{ 2 }\) and that of sin ωt cos ωt is zero. Substituting these values, we obtain average power over a cycle.
P[av] = V[m ]I[m] cos φ x \(\frac { 1 }{ 2 }\) = \(\frac {{ V }_{m}}{ √2 }\) \(\frac {{ I }_{m}}{ √2 }\) cos φ
P[av] = V[RMS ]I[RMS] cos φ …… (2)
where V[RMS ]I[RMS] is called apparent power and cos φ is power factor. The average power of an AC circuit is also known as the true power of the circuit.
Special Cases:
(i) For a purely resistive circuit, the phase angle between voltage and current is zero and cos
φ = 1.
∴ P[av] = V[RMS] I[RMS]
(ii) For a purely inductive or capacitive circuit, the phase angle is ± \(\frac { π }{ 2 }\) and cos \(\left(\pm \frac{\pi}{2}\right)\) = 0
∴ P[av] = 0
(iii) For series RLC circuit, the phase angle φ = tan^-1 \(\left(\frac{\mathrm{x}_{\mathrm{L}}-\mathrm{x}_{\mathrm{c}}}{\mathrm{R}}\right)\)
∴ P[av] = V[RMS ]I[RMS] cos φ
(iv) For series RLC circuit at resonance, the phase angle is zero and cos φ = 1.
∴ P[av] = V[RMS ]I[RMS]
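The result P[av] = V[RMS] I[RMS] cos φ can also be checked numerically by averaging the instantaneous power υi over one full cycle; the Python sketch below uses assumed peak values and phase angle purely for illustration:

import math

Vm, Im, phi = 10.0, 2.0, math.pi / 3      # assumed peak voltage, peak current, phase
omega = 2 * math.pi * 50                  # assumed 50 Hz supply
T = 2 * math.pi / omega                   # one period
N = 100000                                # number of samples over the cycle
avg = sum(Vm * math.sin(omega * k * T / N) * Im * math.sin(omega * k * T / N + phi)
          for k in range(N)) / N          # numerical average of v(t) * i(t)
print("numerical average power :", avg)
print("Vrms * Irms * cos(phi)  :", (Vm / math.sqrt(2)) * (Im / math.sqrt(2)) * math.cos(phi))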
Question 25.
Show that the total energy is conserved during LC oscillations.
Conservation of energy in LC oscillations: During LC oscillations in LC circuits, the energy of the system oscillates between the electric field of the capacitor and the magnetic field of the
inductor. Although, these two forms of energy vary with time, the total energy remains constant. It means that LC oscillations take place in accordance with the law of conservation of energy.
Total energy, U = U[E] + U[B] = \(\frac {{ q }^{2}}{ 2C }\) + \(\frac { 1 }{ 2 }\) Li^2
Let us consider 3 different stages of LC oscillations and calculate the total energy of the system.
Case I:
When the charge in the capacitor, q = Q[m] and the current through the inductor, i = 0, the total energy is given by U = \(\frac {{ Q }_{m}^{2}}{ 2C }\)
The total energy is wholly electrical.
Case II:
When charge = 0; current = I[m], the total energy is U = \(\frac { 1 }{ 2 }\) L I[m]^2
The total energy is wholly magnetic.
Case III:
When charge = q; current = i, the total energy is U = \(\frac {{ q }^{2}}{ 2C }\) + \(\frac { 1 }{ 2 }\) Li^2
Since q = Q[m] cos ωt, i = \(\frac { dq }{ dt }\) = -Q[m]ω sin ωt. The negative sign in the current indicates that the charge in the capacitor decreases with time.
From above three cases, it is clear that the total energy of the system remains constant.
Question 26.
Prove that energy is conserved during electromagnetic induction.
The mechanical energy of the spring-mass system is given by
The energy E remains constant for varying values of x and v. Differentiating E with respect to time, we get
This is the differential equation of the oscillations of the spring-mass system. The general solution of equation (2) is of the form
x(t) = X[m] cos (ωt + φ) …… (3)
where X[m] is the maximum value of x(t), ω, the angular frequency and φ, the phase constant. Similarly, the electromagnetic energy of the LC system is given by
Differentiating U with respect to time, we get
Equations (2) and (5) have the same form and each expresses the constancy of the total energy; thus energy is conserved during the electromagnetic (LC) oscillations, just as in the spring-mass system.
Question 27.
Compare the electromagnetic oscillations of LC circuit with the mechanical oscillations of block spring system to find the expression for angular frequency of LC oscillators mathematically.
The mechanical energy of the spring-mass system is given by
E = \(\frac { 1 }{ 2 }\) mv^2 + \(\frac { 1 }{ 2 }\) kx^2 …… (1)
The energy E remains constant for varying values of x and v. Differentiating E with respect to time, we get
m\(\frac { { d }^{ 2 }x }{ { dt }^{ 2 } } \) + kx = 0 …… (2)
This is the differential equation of the oscillations of the spring-mass system. The general solution of equation (2) is of the form
x(t) = X[m] cos (ωt + φ) …… (3)
where X[m] is the maximum value of x(t), ω, the angular frequency and φ, the phase constant. Similarly, the electromagnetic energy of the LC system is given by
U = \(\frac { 1 }{ 2 }\) Li^2 + \(\frac { 1 }{ 2 }\) \(\frac {{ q }^{2}}{ C }\) …… (4)
Differentiating U with respect to time, we get
L\(\frac { { d }^{ 2 }q }{ { dt }^{ 2 } } \) + \(\frac { 1 }{ C }\) q = 0 …… (5)
Equations (2) and (5) have the same mathematical form, so the charge q oscillates in the LC circuit just as the displacement x oscillates in the spring-mass system, and the general solution of equation (5) is of the form
q(t) = Q[m] cos (ωt + φ) …… (6)
where Q[m] is the maximum value of q(t), ω, the angular frequency and φ, the phase constant.
Current in the LC circuit:
The current flowing in the LC circuit is obtained by differentiating q(t) with respect to time.
i(t) = \(\frac { dq }{ dt }\) = \(\frac { d }{ dt }\) [Q[m] cos (ωt + φ)] = -Q[m] ω sin (ωt + φ); writing I[m] = Q[m]ω,
i(t) = -I[m] sin (ωt + φ) ……. (7)
The equation (7) clearly shows that current varies as a function of time t. In fact, it is a sinusoidally varying alternating current with angular frequency ω.
Angular frequency of LC oscillations:
By differentiating equation (6) twice, we get
\(\frac { { d }^{ 2 }q }{ { dt }^{ 2 } } \) = -Q[m]ω^2 cos (ωt + φ) …….. (8)
Substituting equations (6) and (8) in equation (5),
we obtain L[-Q[m]ω^2 cos (ωt + φ)] + \(\frac { 1 }{ C }\) Q[m] cos (ωt + φ) = 0
Rearranging the terms, the angular frequency of LC oscillations is given by
ω = \(\frac { 1 }{ \sqrt { LC } } \) …… (9)
This equation is the same as that obtained from qualitative analogy.
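For concreteness, a small Python sketch evaluating ω = 1/√(LC) for assumed (illustrative) component values:

import math

L, C = 20e-3, 50e-6                # assumed inductance (H) and capacitance (F)
omega = 1 / math.sqrt(L * C)       # angular frequency of the LC oscillations
f = omega / (2 * math.pi)          # corresponding frequency in hertz
print(f"omega = {omega:.1f} rad/s, f = {f:.1f} Hz")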
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Numerical Problems
Question 1.
A square coil of side 30 cm with 500 turns is kept in a uniform magnetic field of 0.4 T. The plane of the coil is inclined at an angle of 30° to the field. Calculate the magnetic flux through the coil.
Square coil of side (a) = 30 cm = 30 × 10^-2m
Area of square coil (A) = a^2 = (30 × 10^-2)^2 = 9 × 10^-2 m^2
Number of turns (N) = 500
Magnetic field (B) = 0.4 T
Angle between the field and the normal to the coil (θ) = 90° – 30° = 60°
Magnetic flux (Φ) = NBA cos θ = 500 × 0.4 × 9 × 10^-2 × cos 60° = 18 × \(\frac { 1 }{ 2 }\)
Φ = 9 Wb
Question 2.
A straight metal wire crosses a magnetic field of flux 4 mWb in a time 0.4 s. Find the magnitude of the emf induced in the wire.
Magnetic flux (Φ) = 4 m Wb = 4 × 10^-3 Wb
time (t) = 0.4 s
The magnitude of induced emf (e) = \(\frac { dΦ }{ dt }\) = \(\frac {{ 4 × 10 }^{-3}}{ 0.4 }\) = 10^-2 V
e = 10 mV
Question 3.
The magnetic flux passing through a coil perpendicular to its plane is a function of time and is given by Φ[B] = (2t^3 + 4t^2 + 8t + 8) Wb. If the resistance of the coil is 5 Ω, determine the induced
current through the coil at a time t = 3 second.
Magnetic flux (Φ[B]) = (2t^3 + 4t^2 + 8t + 8) Wb
Resistance of the coil (R) = 5 Ω
time (t) = 3 second
Induced current through the coil, I = \(\frac { e }{ R }\)
Induced emf, e = \(\frac {{ dΦ }_{B}}{ dt }\) = \(\frac { d }{ dt }\) (2t^3 + 4t^2 + 8t + 8) = 6t^2 + 8t + 8
Here time (t) = 3 second
e = 6(3)^2 + 8 × 3 + 8 = 54 + 24 + 8 = 86 V
∴ Induced current through the coil, I = \(\frac { e }{ R }\) = \(\frac { 86 }{ 5 }\) = 17.2 A
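The answer can be checked with a few lines of Python, differentiating the given flux polynomial by hand and evaluating at t = 3 s:

def emf(t):
    # derivative of Phi_B(t) = 2t^3 + 4t^2 + 8t + 8 (in Wb), i.e. 6t^2 + 8t + 8
    return 6 * t**2 + 8 * t + 8

R, t = 5.0, 3.0
e = emf(t)
print("e =", e, "V, I =", e / R, "A")   # expected: 86 V and 17.2 A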
Question 4.
A closely wound coil of radius 0.02 m is placed perpendicular to the magnetic field. When the magnetic field is changed from 8000 T to 2000 T in 6 s, an emf of 44 V is induced. Calculate the number
of turns in the coil.
Radius of the coil (r) = 0.02 m
Area of the coil (A) = πr² = 3.14 × (0.02)²= 1.256 × 10^-3 m²
Change in magnetic field, dB = 8000 – 2000 = 6000 T
Time, dt = 6 second
Induced emf, e = 44 V
θ = 0°
Induced emf in the coil, e = NA \(\frac { dB }{ dt }\) cos θ
44 = N × 1.256 × 10^-3 × \(\frac { 6000 }{ 6 }\) × cos 0°
Number of turns N = 35 turns
Question 5.
A rectangular coil of area 6 cm^2 having 3500 turns is kept in a uniform magnetic field of 0.4 T, Initially, the plane of the coil is perpendicular to the field and is then rotated through an angle
of 180°. If the resistance of the coil is 35 Ω, find the amount of charge flowing through the coil.
Area of the rectangular coil, A = 6 cm² = 6 x 10^-4 m²
Number of turns N = 3500 turns
Magnetic field, B = 0.4 T
Resistance of the coil, R= 35 Ω
Change in flux linkage when the coil is rotated through 180°, N∆Φ = NBA cos 180° – NBA cos 0° = -2 NBA
= -2 x 3500 x 0.4 x 6 x 10^-4 = -1.68 Wb
Induced emf, e = \(\frac { N∆Φ }{ ∆t }\), so the induced current is I = \(\frac { e }{ R }\) and the charge that flows is q = I∆t = \(\frac { N∆Φ }{ R }\)
Amount of charge flowing through the coil, q = \(\frac { 1.68 }{ 35 }\) = 0.048 C = 48 x 10^-3 C
Question 6.
An induced current of 2.5 mA flows through a single conductor of resistance 100 Ω. Find out the rate at which the magnetic flux is cut by the conductor.
Induced current, I = 2.5 mA
Resistance of conductor, R = 100 Ω
∴ The rate of change of flux, \(\frac {{ dΦ }_{B}}{ dt }\) = e
\(\frac {{ dΦ }_{B}}{ dt }\) = e = IR = 2.5 x 10^-3 x 100 = 250 x 10^-3 Wb s^-1
\(\frac {{ dΦ }_{B}}{ dt }\) = 250 mWb s^-1
Question 7.
A fan of metal blades of length 0.4 m rotates normal to a magnetic field of 4 x 10^-3 T. If the induced emf between the centre and edge of the blade is 0.02 V, determine the rate of rotation of the blade.
Length of the metal blade, l = 0.4 m
Magnetic field, B = 4 x 10^-3 T
Induced emf, e = 0.02 V
Rotational area of the blade, A = πr² = 3.14 x (0.4)² = 0.5024 m²
Induced emf of a blade rotating at n revolutions per second, e = BAn
∴ n = \(\frac { e }{ BA }\) = \(\frac { 0.02 }{{ 4 × 10 }^{-3} × 0.5024 }\) = 9.95
Rate of rotation of the blade, n = 9.95 revolutions/second
Question 8.
A bicycle wheel with metal spokes of 1 m long rotates in Earth’s magnetic field. The plane of the wheel is perpendicular to the horizontal component of Earth’s field of 4 x 10^-5 T. If the emf
induced across the spokes is 31.4 mV, calculate the rate of revolution of the wheel.
Length of the metal spokes, l = 1 m
Rotational area of the spokes, A = πr² = 3.14 x (1)² = 3.14 m²
Horizontal component of Earth’s field, B = 4 x 10^-5 T
Induced emf, e = 31.4 mV
The rate of revolution of the wheel, n = \(\frac { e }{ BA }\) = \(\frac {{ 31.4 × 10 }^{-3}}{{ 4 × 10 }^{-5} × 3.14 }\)
n = 250 revolutions / second
Question 9.
Determine the self-inductance of a 4000 turn air-core solenoid of length 2 m and diameter 0.04 m.
Length of the air core solenoid, l = 2 m
Diameter, d = 0.04 m
Radius, r = \(\frac { d }{ 2 }\) = 0.02 m
Area of the air core solenoid, A = πr^2 = 3.14 x (0.02)^2 = 1.256 x 10^-3 m^2
Number of Turns, N = 4000 turns
Self inductance, L = µ[0]n^2 Al
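A hedged numerical evaluation in Python, using the standard air-core solenoid expression L = μ[0]N²A/l (equivalent to μ[0]n²Al with n = N/l), gives roughly 12.6 mH for the values above:

import math

mu0 = 4 * math.pi * 1e-7          # permeability of free space
N, l, r = 4000, 2.0, 0.02         # turns, length (m), radius (m) from the problem
A = math.pi * r**2                # cross-sectional area
L = mu0 * N**2 * A / l            # self-inductance of the air-core solenoid
print(f"L = {L * 1e3:.2f} mH")    # roughly 12.6 mH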
Question 10.
A coil of 200 turns carries a current of 4 A. If the magnetic flux through the coil is 6 x 10^-5 Wb, find the magnetic energy stored in the medium surrounding the coil.
Number of turns of the coil, N = 200
Current, I = 4 A
Magnetic flux through the coil, Φ = 6 x 10^-5 Wb
Energy stored in the coil, U = \(\frac { 1 }{ 2 }\) LI²
Self inductance of the coil, L = \(\frac { NΦ }{ I }\)
U =\(\frac { 1 }{ 2 }\) \(\frac { NΦ }{ I }\) x I² = \(\frac { 1 }{ 2}\) NΦI = \(\frac { 1 }{ 2}\) x 200 x 6 x 10^-5 x 4
U = 2400 x 10^-5 J = 0.024 J
Question 11.
A 50 cm long solenoid has 400 turns per cm. The diameter of the solenoid is 0.04 m. Find the magnetic flux of a turn when it carries a current of 1 A.
Length of the solenoid, l = 50 cm = 50 x 10^-2 m
Number of turns per cm, N = 400
Number of turns in 50 cm, N = 400 x 50 = 20000
Diameter of the solenoid, d = 0.04 m
Radius of the solenoid, r = \(\frac { d }{ 2}\) = 0.02 m
Area of the solenoid, A = πr² = 3.14 x (0.02)² = 1.256 x 10^-3 m²
Current passing through the solenoid, I = 1 A
Magnetic flux linked with one turn, Φ = BA = μ[0]nIA = 4π × 10^-7 × 4 × 10^4 × 1 × 1.256 × 10^-3 ≈ 6.3 × 10^-5 Wb
Question 12.
A coil of 200 turns carries a current of 0.4 A. If the magnetic flux of 4 mWb is linked with the coil, find the inductance of the coil.
Number of turns, N = 200; Current, I = 0.4 A
Magnetic flux linked with coil, Φ = 4 mWb = 4 x 10^-3 Wb
Inductance of the coil, L = \(\frac { NΦ }{ I }\) = \(\frac {{ 200 × 4 × 10 }^{-3}}{ 0.4 }\) = \(\frac {{ 800 × 10 }^{-3}}{ 0.4 }\) = 2 H
Question 13.
Two air core solenoids have the same length of 80 cm and same cross-sectional area 5 cm². Find the mutual inductance between them if the number of turns in the first coil is 1200 turns and that in
the second coil is 400 turns.
Length of the solenoids, l = 80 cm = 0.80 m
Cross sectional area of the solenoid, A = 5 cm^2 = 5 x 10^-4 m^2
Number of turns in the I^st coil, N[1] = 1200
Number of turns in the IInd coil, N[2] = 400
Mutual inductance between the two coils, M = \(\frac{\mu_{0} N_{1} N_{2} A}{l}\) = \(\frac{4π × 10^{-7} × 1200 × 400 × 5 × 10^{-4}}{0.80}\) ≈ 3.8 × 10^-4 H
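The same value can be reproduced numerically (a quick check of the coaxial-solenoid formula, not part of the original solution):

import math

mu0 = 4 * math.pi * 1e-7
N1, N2, A, l = 1200, 400, 5e-4, 0.80   # turns, shared area (m^2) and length (m)
M = mu0 * N1 * N2 * A / l              # mutual inductance of the pair
print(f"M = {M * 1e3:.3f} mH")         # about 0.377 mH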
Question 14.
A long solenoid having 400 turns per cm carries a current 2A. A 100 turn coil of cross sectional area 4 cm^2 is placed co-axially inside the solenoid so that the coil is in the field produced by the
solenoid. Find the emf induced in the coil if the current through the solenoid reverses its direction in 0.04 sec.
Turn density of the long solenoid, n[1] = \(\frac { 400 }{{10}^{ -2 }}\) = 4 × 10^4 turns per metre
Number of turns inside the solenoid, N[2] = 100
Cross-sectional area of the coil, A = 4 cm^2 = 4 x 10^-4 m^2
Current through the solenoid, I = 2A; time, t = 0.04 s
Induced emf of the coil, e = -M \(\frac { dI }{ dt }\)
Mutual inductance of the coil,
Induced emf of the coil,
Since the current through the solenoid reverses its direction, the change in current is dI = 2 – (-2) = 4 A, and the magnitude of the induced emf is e ≈ 0.2 V
Question 15.
A 200 turn coil of radius 2 cm is placed co-axially within a long solenoid of 3 cm radius. If the turn density of the solenoid is 90 turns per cm, then calculate mutual inductance of the coil.
Number of turns of the solenoid, N[2] = 200
Radius of the solenoid, r = 2 cm = 2 x 10^-2 m
Area of the solenoid, A = πr^2= 3.14 x (2 x 10^-2)^2 = 1.256 x 10^-3 m^2
Turn density of long solenoid per cm, N[1] = 90 x 10^2
Mutual inductance of the coil,
= 283956.48 x 10^-8 ⇒ M = 2.84 mH
Question 16.
The solenoids S[1] and S[2] are wound on an iron-core of relative permeability 900. The area of their cross-section and their length are the same and are 4 cm^2 and 0.04 m, respectively. If the
number of turns in S[1] is 200 and that in S[2] is 800, calculate the mutual inductance between the coils. The current in solenoid 1 is increased from 2A to 8A in 0.04 second. Calculate the induced
emf in solenoid 2.
Relative permeability of iron core, μ[r] = 900
Number of turns of solenoid S[1], N[1] = 200
Number of turns of solenoid S[2], N[2] = 800
Area of cross section, A = 4 cm^2 = 4 x 10^-4 m^2
Length of the solenoid S[1], l[1] = 0.04 m
Change in current, dI = I[2] – I[1] = 8 – 2 = 6 A
time taken, t = 0.04 second
emf induced in solenoid S[2] e = -M \(\frac { dI }{ dt }\)
Mutual inductance between the two coils, M = \(\frac{\mu_{0} \mu_{r} N_{1} N_{2} A}{l}\)
M = 180864 x 10^-5 = 1.81 H
Emf induced in solenoid S[2], e = -M\(\frac { dI }{ dt }\) = -1.81 x \(\frac { 6 }{ 0.04 }\)
Magnitude of emf, e = 271.5 V
Question 17.
A step-down transformer connected to main supply of 220 V is made to operate 11 V, 88 W lamp. Calculate (i) Transformation ratio and (ii) Current in the primary.
Voltage in primary coil, V[p] = 220 V
Voltage in secondary coil, V[s] = 11 V
Output power = 88 W
(i) To find transformation ratio, k = \(\frac {{ V }_{ s }}{{ V }_{ p }}\) = \(\frac { 11 }{ 220 }\) = \(\frac { 1 }{ 20 }\)
(ii) Current in primary, I[p] = \(\frac {{ V }_{ s }}{{ V }_{ p }}\) x I[s]
So, I[s] = ?
Output power = V[s] I[s]
⇒ 88 = 11 x I[s]
I[s] = \(\frac { 88 }{ 11 }\) = 8A
Therefore, I[p] = \(\frac {{ V }_{ s }}{{ V }_{ p }}\) x I[s] = \(\frac { 11 }{ 220 }\) x 8 = 0.4 A
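The same calculation expressed as a few lines of Python, as a cross-check:

Vp, Vs, P_out = 220.0, 11.0, 88.0   # primary voltage, secondary voltage, lamp power
K = Vs / Vp                         # transformation ratio (1/20)
Is = P_out / Vs                     # secondary (lamp) current = 8 A
Ip = K * Is                         # primary current for an ideal transformer = 0.4 A
print(f"K = 1/{Vp / Vs:.0f}, Is = {Is:.1f} A, Ip = {Ip:.1f} A")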
Question 18.
A 200V/120V step-down transformer of 90% efficiency is connected to an induction stove of resistance 40 Ω. Find the current drawn by the primary of the transformer.
Primary voltage, V[p] = 200 V
Secondary voltage, V[s] = 120 V
Efficiency, η = 90%
Secondary resistance, R[s] = 40 Ω
Secondary current, I[s] = \(\frac {{ V }_{ s }}{{ R }_{ s }}\) = \(\frac { 120 }{ 40 }\) = 3 A
Output power = V[s] I[s] = 120 × 3 = 360 W
Input power = \(\frac { Output power }{ η }\) = \(\frac { 360 }{ 0.9 }\) = 400 W
Current drawn by the primary of the transformer, I[p] = \(\frac { Input power }{{ V }_{ p }}\) = \(\frac { 400 }{ 200 }\) = 2 A
Question 19.
The 300 turn primary of a transformer has resistance 0.82 Ω and the resistance of its secondary of 1200 turns is 6.2 Ω. Find the voltage across the primary if the power output from the secondary at
1600V is 32 kW. Calculate the power losses in both coils when the transformer efficiency is 80%.
Efficiency, η = 80% = \(\frac { 80 }{ 100 }\)
Number of turns in primary, N[p] = 300
Number of turns in secondary, N[s] = 1200
Resistance in primary, R[p] = 0.82 Ω
Resistance in secondary, R[s] = 6.2 Ω
Secondary voltage, V[s] = 1600 V
Output power = 32 kW
Output power = V[s] I[s] ⇒ I[s] = \(\frac { 32000 }{ 1600 }\) = 20 A
Primary voltage, V[p] = V[s] × \(\frac {{ N }_{ p }}{{ N }_{ s }}\) = 1600 × \(\frac { 300 }{ 1200 }\) = 400 V
Input power = \(\frac { Output power }{ η }\) = \(\frac { 32 kW }{ 0.8 }\) = 40 kW ⇒ Primary current, I[p] = \(\frac { 40000 }{ 400 }\) = 100 A
Power loss in primary = \({ { I }_{ p }^{ 2 }{ R }_{ p } }\) = (100)² x 0.82 = 8200 = 8.2 kW
Power loss in secondary = \({ { I }_{ s }^{ 2 }{ R }_{ s } }\) = (20)² x 6.2 = 2480 = 2.48 kW
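The intermediate currents and voltages used above can be reproduced with this short Python sketch (a step-by-step re-computation of the stated results):

P_out, Vs = 32e3, 1600.0        # output power and secondary voltage
Np, Ns = 300, 1200              # primary and secondary turns
Rp, Rs = 0.82, 6.2              # winding resistances
eff = 0.80                      # transformer efficiency

Is = P_out / Vs                 # secondary current, 20 A
Vp = Vs * Np / Ns               # primary voltage, 400 V
Ip = (P_out / eff) / Vp         # primary current from the input power, 100 A
print(f"Is = {Is:.0f} A, Vp = {Vp:.0f} V, Ip = {Ip:.0f} A")
print(f"primary loss   = {Ip**2 * Rp / 1e3:.2f} kW")   # 8.2 kW
print(f"secondary loss = {Is**2 * Rs / 1e3:.2f} kW")   # 2.48 kW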
Question 20.
Calculate the instantaneous value at 60°, average value and RMS value of an alternating current whose peak value is 20 A.
Peak value of current, I[m] = 20 A
Angle, θ = 60° [θ = ωt]
(i) Instantaneous value of current,
i = I[m] sin ωt = I[m] sin θ
= 20 sin 60° = 20 x \(\frac { √3 }{ 2 }\) = 10√3 = 10 x 1.732
i = 17.32 A
(ii) Average value of current,
I[av] = \(\frac {{ 2I }_{m}}{ π }\) = \(\frac { 2 × 20 }{ 3.14 }\)
I[av] = 12.74 A
(iii) RMS value of current,
I[RMS] = \(\frac{\mathrm{I}_{\mathrm{m}}}{\sqrt{2}}\) = 0.707 I[m] = 0.707 x 20
I[RMS] = 14.14 A
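The three values can be reproduced in Python (tiny differences from the worked answer come from using π and √2 to full precision):

import math

Im = 20.0                                   # peak current
theta = math.radians(60)                    # instant considered
print(f"instantaneous i = {Im * math.sin(theta):.2f} A")   # about 17.32 A
print(f"average  I_av   = {2 * Im / math.pi:.2f} A")       # about 12.73 A (over a half cycle)
print(f"RMS      I_rms  = {Im / math.sqrt(2):.2f} A")      # about 14.14 A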
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Conceptual Questions
Question 1.
A graph between the magnitude of the magnetic flux linked with a closed loop and time is given in the figure. Arrange the regions of the graph in ascending order of the magnitude of induced emf in
the loop.
According to electromagnetic induction, induced emf,
e = \(\frac { dΦ }{ dt }\)
Ascending order of induced emf from the graphical representation is b < c < d < a.
Question 2.
Using Lenz’s law, predict the direction of induced current in conducting rings 1 and 2 when current in the wire is steadily decreasing.
According to Lenz’s law, the induced current in each ring flows in such a direction as to oppose the change in magnetic flux, i.e. so as to maintain the decreasing flux.
If the current in the wire decreases, the induced current flows in ring 1 in the clockwise direction and in ring 2 in the anti-clockwise direction.
Question 3.
A flexible metallic loop abcd in the shape of a square is kept in a magnetic field with its plane perpendicular to the field. The magnetic field is directed into the paper normally. Find the
direction of the induced current when the square loop is crushed into an irregular shape as shown in the figure.
The magnetic flux linked with the loop decreases due to the decrease in the area of the loop. The induced emf causes a current to flow in such a direction that the wire is pulled outward on all sides. According to Fleming’s left-hand rule, the force on the wire acts outward on all sides, opposing the crushing of the loop.
Question 4.
Predict the polarity of the capacitor in a closed circular loop when two bar magnets are moved as shown in the figure.
When magnet 1 is moved with its South pole towards the coil, an emf is induced in the coil as the magnetic flux through the coil changes. Seen from the left hand side, the direction of the induced current appears to be clockwise; seen from the right hand side, it appears to be anti-clockwise. In the capacitor, plate A has positive polarity and plate B has
negative polarity.
Question 5.
In series LC circuit, the voltages across L and C are 180° out of phase. Is it correct? Explain.
Yes, it is correct. In a series LC circuit, the voltage across the capacitor lags the current by 90° while the voltage across the inductor leads the current by 90°. This makes the inductor and capacitor voltages 180° out of phase.
Question 6.
When does power factor of a series RLC circuit become maximum?
For a series RLC circuit, the power factor is cos φ = \(\frac { R }{ Z }\).
At resonance, X[L] = X[C] and the circuit behaves as purely resistive: φ = 0°, cos 0° = 1.
Thus the power factor assumes its maximum value (unity) when the series RLC circuit is at resonance, i.e. purely resistive.
Question 7.
Draw graphs showing the distribution of charge in a capacitor and current through an inductor during LC oscillations with respect to time. Assume that the charge in the capacitor is maximum
For the capacitor, the graph of charge against time is a cosine curve, q = Q[m] cos ωt, and for the inductor the current varies as i = I[m] sin ωt.
The charge and the current are therefore 90° out of phase, each oscillating sinusoidally with the same period.
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Additional Questions solved
I Choose The Correct Answer
Question 1.
An emf can be induced by _______.
a) Change in a magnetic field
b) Change in area of cross-section
c) Change in angle
d) Change in the magnetic field, area and angle
d) Change in the magnetic field area and angle
Question 2.
A rectangular coil of 100 turns and size 0.1 m x 0.05 m is placed perpendicular to a magnetic field of 0.1 T. If the field drops to 0.05 T in 0.05 s, the magnitude of the emf induced in the coil is-
(a) 0.5 V
(b) 0.75 V
(c) 1.0 V
(d) 1.5 V
(a) 0.5 V
ε = 0.5 V
Question 3.
The formula of induced emf is ________
a) emf = B^2l
b) emf = Bil
c) emf = Blv
d) emf = B^2v
c) emf = Blv
Question 4.
A coil of area 10 cm^2, 10 turns and resistance 20 Ω is placed in a magnetic field directed perpendicular to the plane of the coil and changing at the rate of 10^8 gauss/second. The induced
current in the coil will be-
(a) 5 A
(b) 0.5 A
(c) 0.05 A
(d) 50 A
(a) 5 A
Question 5.
Find the strength of the magnetic field in a conductor 0.5 m long moving with a velocity of 10 m/s, inducing an emf of 20 V
a) 1 T
b) 2 T
c) 3 T
d) 4 T
d) 4 T
emf = Blv;
20 = B × 0.5 × 10 ⇒ B = 4 T
Question 6.
Eddy currents are produced in a material when it is-
(a) heated
(b) placed in a time-varying magnetic field
(c) placed in an electric field
(d) placed a uniform magnetic field
(b) placed in a time-varying magnetic field
Question 7.
An emf of 5 V is induced in an inductance when the current in it changes at a steady rate from 3 A to 2 A in 1 millisecond. The value of inductance is-
(a) 5 mH
(b) 5 H
(c) 5000 H
(d) zero
(a) 5 mH
Question 8.
Faraday’s law of electromagnetic induction is related to the-
(a) Law of conservation of charge
(b) Law of conservation of energy
(c) Third law of motion
(d) Law of conservation of angular momentum
(b) Law of conservation of energy
Question 9.
The inductance of a coil is proportional to-
(a) its length
(b) the number of turns
(c) the resistance of the coil
(d) square of the number of turns
(d) square of the number of turns
Question 10.
Ac cannot be used for _______
a) producing heat
b) producing light
c) magnetizing and electroplating
d) all the above
c) Magnetizing and electroplating
Question 11.
A coil of area 80 cm^2 and 50 turns is rotating with 2000 revolutions per minute about an axis perpendicular to a magnetic field of 0.05 T. The maximum value of the emf developed in it is-
(a) 2000 πV
(b) \(\frac { 10π }{ 3 }\) V
(c) \(\frac { 4π }{ 3 }\)V
(d) \(\frac { 2 }{ 3 }\) V
(c) \(\frac { 4π }{ 3 }\)V
ε = NBA ω = 50 x 0.05 x 80 x 10^-4 x \(\frac { 2π × 2000 }{ 60 }\) = \(\frac { 4π }{ 3 }\)V
Question 12.
The direction of induced current during electromagnetic induction is given by-
(a) Faraday’s law
(b) Lenz’s law
(c) Maxwell’s law
(d) Ampere’s law
(b) Lenz’s law
Question 13.
An 800-turn coil of effective area 0.05 m^2 is kept perpendicular to a magnetic field 5 × 10^-5 T. When the plane of the coil is rotated by 90° around any of its coplanar axis in 0.1 s, the emf
induced in the coil will be:
a) 2 V
b) 0.2 V
c) 0.002 V
d) 0.02 V
d) 0.02 V
Question 14.
In a step-down transformer the input voltage is 22 kV and the output voltage is 550 V. The ratio of the number of turns in the secondary to that in the primary is-
(a) 1 : 20
(b) 20 : 1
(c) 1 : 40
(d) 40 : 1
(c) 1 : 40
\(\frac {{ N }_{ s }}{{ N }_{ p }}\) = \(\frac {{ V }_{ s }}{{ V }_{ p }}\) = \(\frac { 550 }{ 22000 }\) = \(\frac { 1 }{ 40 }\)
Question 15.
The self-inductance of a coil is 5 H. A current of 1 A changes to 2 A within 5 s through the coil. The value of induced emf will be-
(a) 10 V
(b) 0.1 V
(c) 1.0 V
(d) 100 V
(c) 1.0 V
Question 16.
The low-loss transformer has 230 V applied to the primary and gives 4.6 V in the secondary. The secondary is connected to a load which draws 5 amperes of current. The current (in amperes) in the
primary is-
(a) 0.1 A
(b) 1.0 A
(c) 10 A
(d) 250 A
(a) 0.1 A
I[p] = \(\frac {{ V }_{ s }{ I }_{ s }}{{ V }_{ p }}\) = \(\frac { 4.6 × 5 }{ 230 }\) = 0.1A
Question 17.
A coil is wound on a frame of rectangular cross-section. If all the linear dimensions of the frame are increased by a factor 2 and the number of turns per unit length of the coil remains the same.
Self-inductance of the coil increases by a factor of-
(a) 4
(b) 8
(c) 12
(d) 16
(b) 8
If all the linear dimensions are doubled, the cross-sectional area becomes four times and the length becomes twice. With the number of turns per unit length unchanged, L = μ[0]n²Al, so the self-inductance increases by a factor of 4 × 2 = 8.
Question 18.
The current passing through a choke coil of 5 henries is decreasing at the rate of 2 amp/sec. The emf developed across the coil is_______
a) 10 V
b) -10 V
c) 2.5 V
d) -2.5 V
a) 10 V
E = -L \(\frac{\mathrm{dI}}{\mathrm{dt}}\) = -5 (-2) = 10 V
Question 19.
A magnetic field 2 x 10^-2 T acts at right angles to a coil of area 100 cm^2 with 50 turns. The average emf induced in the coil will be 0.1 V if it is removed from the field in time.
(a) 0.01 s
(b) 0.1 s
(c) 1 s
(d) 10 s
(b) 0.1 s
Question 20.
Number of turns in a coil is increased from 10 to 100. Its inductance becomes-
(a) 10 times
(b) 100 times
(c) 1/10 times
(d) 25 times
(b) 100 times
L ∝ N², so increasing the number of turns from 10 to 100 increases the inductance by a factor of 100.
Question 21.
The north pole of a magnet is brought near a metallic ring as shown in the figure. The direction of induced current in the ring, as seen by the magnet is-
(a) anti-clockwise
(b) first anti-clockwise and then clockwise
(c) clockwise
(d) first clockwise and then anti-clockwise
(a) anti-clockwise
Question 22.
If the angular speed of rotation of an armature of the alternating current generator is doubled, then the induced electromotive force will be
a) Twice
b) Four times
c) No change
d) Half
a) Twice
Question 23.
The core of a transformer is laminated to reduce.
(a) Copper loss
(b) Magnetic loss
(c) Eddy current loss
(d) Hysteresis loss
(c) Eddy current loss
Question 24.
Which of the following has the dimension of time?
(a) LC
(b) \(\frac { R }{ L }\)
(c) \(\frac { L }{ R }\)
(d) \(\frac { C }{ L }\)
(c) \(\frac { L }{ R }\)
Question 25.
Which statement is correct?
a) Transformer obtain a suitable DC voltage
b) Transformer converts DC into AC
c) Transformer obtain a suitable AC voltage
d) Transformer convert AC into DC
c) Transformer obtain a suitable AC voltage
Question 26.
Alternating current can be measured by
(a) moving coil galvanometer
(b) hotwire ammeter
(c) tangent galvanometer
(d) none of the above
(b) hotwire ammeter
Question 27.
In an LCR circuit, the energy is dissipated in-
(a) R only
(b) R and L only
(c) R and C only
(d) R, L and C
(a) R only
Question 28.
A 40 Ω electric heater is connected to 200 V, 50 Hz main supply. The peak value of the electric current flowing in the circuit is approximately-
(a) 2.5 A
(b) 5 A
(c) 7 A
(d) 10 A
(c) 7 A
I[0] = \(\frac {{ V }_{0}}{ R }\) = \(\frac{200 \sqrt{2}}{40}\) = 5√2 ≈ 7 A
Question 29.
The secondary transformer gives 200 V when 2 kW power is supplied to its 500 turns primary at 0.5 amperes. The number of turns in the secondary is
a) 25
b) 30
c) 35
d) 40
a) 25
V[P] I[P] = 2000 W ⇒ V[P] = \(\frac{2000}{0.5}\) = 4000 V
\(\frac{V_{P}}{V_{S}}=\frac{N_{P}}{N_{S}}\) ⇒ \(\frac{4000}{200}=\frac{500}{N_{S}}\) ⇒ N[S] = 25
Question 30.
An inductance, a capacitance and a resistance are connected in series across a source of alternating voltages. At resonance, the applied voltage and the current flowing through the circuit will have
a phase difference of-
(a) \(\frac { π }{ 4 }\)
(b) zero
(c) π
(d) \(\frac { π }{ 2 }\)
(b) zero
Question 31.
In an AC circuit, the rms value of the current I[rms], is related to the peak current I[0 ]as-
(a) I[rms] = \(\frac {{I}_{0}}{ π }\)
(b) I[rms] = \(\frac {{I}_{0}}{ √2 }\)
(c) I[rms] = √2 I[0]
(d) I[rms] = πI[0]
(b) I[rms] = \(\frac {{I}_{0}}{ √2 }\)
Question 32.
If a capacitance of \(\frac{10^{2}}{\pi}\) µF is connected across 220 V, 50 Hz A.C., calculate the capacitive reactance.
a) 1000Ω
b) 100Ω
c) 10Ω
d) 0Ω
b) 100Ω
χ[C] = \(\frac{1}{2 \pi f \mathrm{C}}\)
= \(\frac{1}{2 \times \pi \times 50 \times \frac{10^{-4}}{\pi}}\)
= 100Ω
Question 33.
The reactance of a capacitance at 50 Hz is 5 Ω. Its reactance at 100 Hz will be-
(a) 5 Ω
(b) 10 Ω
(c) 20 Ω
(d) 2.5 Ω
(d) 2.5 Ω.
Question 34.
In a LCR AC circuit off-resonance, the current-
(a) is always in phase with the voltage
(b) always lags behind the voltage
(c) always leads the voltage
(d) may lead or lag behind the voltage
(d) may lead or lag behind the voltage
Question 35.
The average power dissipation in a pure inductance L, through which a current I[0 ]sin ωt is flowing is-
(a) \(\frac { 1 }{ 2 }\) L\({ I }_{ 0 }^{ 2 }\)
(b) L\({ I }_{ 0 }^{ 2 }\)
(c) 2 L\({ I }_{ 0 }^{ 2 }\)
(d) zero
(d) zero
Question 36.
The power in an AC circuit is given by P = V[rms] I[rms] cos Φ. The value of the power factor cos Φ in series LCR circuit at resonance is-
(a) zero
(b) 1
(c) \(\frac { 1 }{ 2 }\)
(d) \(\frac { 1 }{ √2 }\)
(b) 1
Question 37.
The induced current depends upon the speed with which the conductor moves and
a) Resistance of G
b) Voltage of Loop
c) Current of loop
d) resistance of the loop
d) resistance of the loop
Question 38.
In an AC circuit containing only capacitance, the current-
(a) leads the voltage by 180°
(b) remains in phase with the voltage
(c) leads the voltage by 90°
(d) lags the voltage by 90°
(c) leads the voltage by 90°
Question 39.
In the alternating current circuit
a) Average value of current is zero
b) Average value of emf is zero
c) power dissipation is zero
d) phase difference is zero
a) Average value of current is zero
Question 40.
What is the value of inductance L for which the current is maximum in a series LCR circuit with C = 10 μF and ω = 1000 s^-1?
(a) 1 mH
(b) 10 mH
(c) 100 mH
(d) Cannot be calculated unless R is known
(c) 100 mH
L = \(\frac { 1 }{ { \omega }^{ 2 }C } \) = \(\frac{1}{(1000)^{2} \times 10 \times 10^{-6}}\) = 0.1 H = 100 mH
II Fill in the Blanks
Question 1.
Electromagnetic induction is used in …………….
transformer and AC generator
Question 2.
Lenz’s Law is in accordance with the law of …………….
conservation of energy.
Question 3.
The self-inductance of a straight conductor is …………….
negligibly small
Question 4.
Transformer works on …………….
AC only
Question 5.
The power loss is less in transmission lines when …………….
voltage is more but current is less
Question 6.
The law that gives the direction of the induced current produced in a circuit is …………….
Lenz’s law
Question 7.
Fleming’s right-hand rule is otherwise called …………….
generator rule
Question 8.
Unit of self-inductance is …………….
henry (H)
Question 9.
The mutual induction is very large if the two coils are wound on …………….
soft iron core
Question 10.
When the coil is in a vertical position, the angle between the normal to the plane of the coil and magnetic field is …………….
Question 11.
The emf induced by changing the orientation of the coil is ……………. in nature.
alternating (sinusoidal)
Question 12.
In a three-phase AC generator, the three coils are inclined at an angle of …………….
120°
Question 13.
The emf induced in each of the coils differ in phase by …………….
120°
Question 14.
A device which converts the high alternating voltage into low alternating voltage and vice versa is …………….
transformer
Question 15.
For an ideal transformer efficiency η is …………….
100% (unity)
Question 16.
The alternating emf induced in the coil varies …………….
periodically in both magnitude and direction
Question 17.
For direct current, inductive reactance is …………….
zero
Question 18.
In an inductive circuit, the average power of a sinusoidal quantity of double the frequency over a complete cycle is …………….
zero
Question 19.
For direct current, the resistance offered by a capacitor is …………….
infinite
Question 20.
In a capacitive circuit, power over a complete cycle is …………….
zero
Question 21.
Q-factor measures the ……………. in the resonant circuit
sharpness of resonance (voltage magnification)
Question 22.
Voltage drop across inductor and capacitor differ in phase by …………….
180°
Question 23.
Angular resonant frequency (co) is …………….
\(\frac { 1 }{ \sqrt { LC } } \)
Question 24.
A circuit will have flat resonance if its Q-value is …………….
low
Question 25.
The average power consumed by the choke coil over a complete cycle is …………….
zero
III Match the following
Question 1.
(i) → (c)
(ii) → (a)
(iii) → (d)
(iv) → (b)
Question 2.
(i) → (c)
(ii) → (a)
(iii) → (d)
(iv) → (b)
Question 3.
(i) → (c)
(ii) → (d)
(iii) → (b)
(iv) → (a)
Question 4.
Type of impedance phase between voltage and current
(i) → (c)
(ii) → (d)
(iii) → (a)
(iv) → (b)
Question 5.
Energy in two oscillatory systems: (LC oscillator and spring-mass system)
(i) → (c)
(ii) → (d)
(iii) → (a)
(iv) → (b)
IV Assertion and reason
(a) If both assertion and reason are true and the reason is the correct explanation of the assertion.
(b) If both assertion and reason are true but reason is not the correct explanation of the assertion.
(c) If assertion is true but the reason is false.
(d) If the assertion and reason both are false.
(e) If assertion is false but the reason is true.
Question 1.
Assertion: Eddy currents are produced in any metallic conductor when the flux around it is changed.
Reason: Electric potential determines the flow of charge.
(b) If both assertion and reason are true but reason is not the correct explanation of the assertion.
When a metallic conductor is moved in a magnetic field, the magnetic flux through it varies. This disturbs the free electrons of the metal and sets up an induced emf in it. As the metal has no free ends, i.e. it is closed in itself, an induced (eddy) current flows.
Question 2.
Assertion: Faraday’s laws are consequences of conservation of energy.
Reason: In a purely resistive AC circuit, the current lags behind the emf in phase.
(c) If assertion is true but the reason is false.
According to Faraday’s law, the conversion of mechanical energy into electrical energy is in accordance with the law of conservation of energy. It is also clearly known that in pure resistance, the
emf is in phase with the current.
Question 3.
Assertion: The inductance coil is made of copper.
Reason: Induced current is more in wire having less resistance.
(a) If both assertion and reason are true and the reason is the correct explanation of the assertion.
Inductance coils made of copper will have very small ohmic resistance.
Question 4.
Assertion: An aircraft flies along the meridian, the potential at the ends of its wings will be the same.
Reason: Whenever there is a change in the magnetic flux, an emf is induced.
(e) If assertion is false but the reason is true.
As the aircraft flies, the magnetic flux through its wings changes due to the vertical component of the Earth’s magnetic field. Due to this, an induced emf is produced across the wings of the aircraft.
Therefore, the wings of the aircraft will not be at the same potential.
Question 5.
Assertion: In series, LCR circuit resonance can take place.
Reason: Resonance takes place if inductance and capacitive reactances are equal and opposite.
(a) If both assertion and reason are true and the reason is the correct explanation of the assertion.
At resonant frequency X[L] = X[C]
So, Impedance, Z = R (minimum)
Therefore, the current in the circuit is maximum.
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Short Answer Questions
Question 1.
Define magnetic flux (Φ[B]).
The magnetic flux through an area A in a magnetic field is defined as the number of magnetic field lines passing through that area normally and is given by the equation,
Φ[B] = \(\vec { B } \cdot \vec { A } \) = BA cos θ
Question 2.
What are the applications of eddy current?
1. Induction stove
2. Electromagnetic damping
3. Eddy current brake
4. Eddy current testing
Question 3.
Define the unit of self-inductance.
The unit of self-inductance is henry. One henry is defined as the self-inductance of a coil in which a change in current of one ampere per second produces an opposing emf of one volt.
Question 4.
Define mutual inductance in terms of flux and current.
The mutual inductance M[21] is defined as the flux linkage of the coil 2 when 1A current flows through coil 1.
M[21] = \(\frac{\mathrm{N}_{2} \mathrm{\phi}_{21}}{i_{1}}\)
Question 5.
Define self-inductance in terms of induced emf.
The inductance of a coil is also defined as the opposing emf induced in the coil when the rate of change of current through the coil is 1 A s^-1
L = \(\frac{-\varepsilon}{d i / d t}\)
Question 6.
List out the advantages of the three-phase alternator.
Three-phase system has many advantages over single-phase system, which is as follows:
• For a given dimension of the generator, three-phase machine produces higher power output than a single-phase machine.
• For the same capacity, three-phase alternator is smaller in size when compared to a single-phase alternator.
• A three-phase transmission system is cheaper. A relatively thinner wire is sufficient for the transmission of three-phase power.
Question 7.
Mention the differences between a step-up and a step-down transformer.
Question 8.
Define efficiency of transformer.
The efficiency η of a transformer is defined as the ratio of the useful output power to the input power. Thus
Transformers are highly efficient devices having their efficiency in the range of 96 – 99%. Various energy losses in a transformer will not allow them to be 100% efficient.
Question 9.
When does the capacitive circuit offer infinite resistance?
The capactive reactance
χ[C] = \(\frac{1}{\omega C}\)
For steady (direct) current, f = 0, so χ[C] = \(\frac{1}{2 \pi f C}\) = \(\frac{1}{0}\) → ∞
Thus the circuit offers infinite resistance (infinite reactance) to a steady DC current.
Question 10.
An inductor blocks AC but it allows DC. Why? and How?
An inductor L is a closely wound helical coil. A steady DC current flowing through L produces a uniform magnetic field around it, and the magnetic flux linked with it remains constant. Therefore there is no
self-induction and no self-induced emf (back emf). Since the inductor then offers only its small ohmic resistance, DC flows through an inductor easily.
The AC flowing through L produces a time-varying magnetic field which in turn induces self-induced emf (back emf). This back emf, according to Lenz’s law, opposes any change in the current. Since AC
varies both in magnitude and direction, its flow is opposed in L. For an ideal inductor of zero ohmic resistance, the back emf is equal and opposite to the applied emf. Therefore L blocks AC.
Question 11.
A capacitor blocks DC but allows AC. Explain.
Capacitive reactance, X[C] = \(\frac { 1 }{ ωC }\) = \(\frac { 1 }{ 2πƒc }\)
where, ƒ = 0, X[C] = ∞
where, ƒ is the frequency of the ac supply. In a dc circuit ƒ = 0. Hence the capacitive reactance has infinite value for dc and a finite value for ac. In other words, a capacitor serves as a block
for dc and offers an easy path to ac.
Question 12.
Why dc ammeter cannot read ac?
A dc ammeter cannot read ac because the average value of ac is zero over a complete cycle.
Question 13.
Name the power loss that occurs during long-range power transmission and give its formula. Explain briefly how this power loss can be overcome.
1. The power loss that occurs during long-range power transmission is the Joule heating loss in the transmission lines.
2. Power loss = I²R, where I is the current through the lines and R is their resistance.
3. It is overcome by stepping up the voltage with a transformer at the transmitting end, so that the current (and hence the I²R loss) is small, and stepping the voltage down again at the receiving end.
Question 14.
What is meant by ‘Wattful current’?
The component of current (I[rms] cos Φ) which is in phase with the voltage is called the active component. The power consumed by this current is V[rms] I[rms] cos Φ; hence it is also known as ‘Wattful’ current.
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Long Answer Questions
Question 1.
Derive an expression for Mutual Inductance between two long co-axial solenoids.
Mutual inductance between two long co-axial solenoids:
Consider two long co-axial solenoids of the same length l. The length of these solenoids is large when compared to their radii, so that the magnetic field produced inside the solenoids is uniform and the fringing effect at the ends may be ignored. Let A[1] and A[2] be the areas of cross section of the solenoids, with A[1] being greater than A[2]. The turn densities of these solenoids are n[1] and n[2] respectively.
Let i[1] be the current flowing through solenoid 1, then the magnetic field produced inside it is
B[1] = μ[0]n[1]i[1].
As the field lines of \(\vec {{ B }_{1}} \) are passing through the area bounded by solenoid 2, the magnetic flux is linked with each turn of solenoid 2 due to solenoid 1 and is given by
since θ = 0°
The flux linkage of solenoid 2 with total turns N[2] is
N[2]Φ[21] = (n[2]l)(μ[0] n[1] i[1])
since N[2] = n[2]l
N[2]Φ[21] = (μ[0] n[1] n[2] A[2]l)i[1] ….. (1)
From equation of mutual induction
N[2]Φ[21] = M[21] i[1] …… (2)
Comparing the equations (1) and (2),
M[21] = μ[0] n[1] n[2] A[2]l ….. (3)
This gives the expression for mutual inductance M[21] of the solenoid 2 with respect to solenoid 1. Similarly, we can find the mutual inductance M[12] of solenoid 1 with respect to solenoid 2 as given below.
The magnetic field produced by the solenoid 2 when carrying a current i[2] is
B[2] = μ[0] n[2] i[2]
This magnetic field B[2] is uniform inside solenoid 2 but is almost zero outside it. Therefore, for solenoid 1, the area A[2] is the effective area over which the magnetic field B[2] is present, not the area A[1]. Then the magnetic flux Φ[12] linked with each turn of solenoid 1 due to solenoid 2 is
The flux linkage of solenoid 1 with total turns N[1] is
[Since N[1] = n[1]l]
[Since N[1] Φ[12] = M[12]i[2]]
N[1] Φ[12] = (n[1]l) (μ[0] n[2] i[2]) A[2]
N[1] Φ[12] = (μ[0] n[1] n[2] A[2]l) i[2]
M[12]i[2] = (μ[0] n[1] n[2] A[2]l) i[2]
Therefore, we get
∴ M[12] = μ[0] n[1] n[2] A[2]l ……. (4)
From equation (3) and (4), we can write
M[12] = M[21] = M ……. (5)
In general, the mutual inductance between two long co-axial solenoids is given by
M= μ[0] n[1] n[2] A[2]l ……. (6)
If a dielectric medium of relative permeability μ[r] is present inside the solenoids, then
M = μn[1] n[2] A[2]l
or M = μ[0] μ[r] n[1] n[2] A[2]l
Question 2.
How will you define the unit of mutual-inductance?
Unit of mutual inductance:
The unit of mutual inductance is also henry (H).
If i[1] = 1 A and N[2] Φ[21] = 1 Wb turns, then M[21] = 1 H.
Therefore, the mutual inductance between two coils is said to be one henry if a current of 1A in coil 1 produces unit flux linkage in coil 2.
If \(\frac {{ di }_{1}}{ dt }\) = 1 As^-1 and ε[2] = -1 V, then M[21] = 1 H.
Therefore, the mutual inductance between two coils is one henry if a current changing at the rate of 1 As^-1 in coil 1 induces an opposing emf of 1 V in coil 2.
Question 3.
Find out the phase relationship between voltage and current in a pure resister circuit.
AC circuit containing pure resistor:
Consider a circuit containing a pure resistor of resistance R connected across an alternating voltage source. The instantaneous value of the alternating voltage is given by
υ = V[m] sin ωt ….. (1)
An alternating current i flowing in the circuit due to this voltage develops a potential drop across R and is given by
V[R] = iR ……. (2)
Kirchoff’s loop rule states that the algebraic sum of potential differences in a closed circuit is zero. For this resistive circuit,
υ – V[R] = 0
From equation (1) and (2),
V[m] sin ωt = iR
⇒ i = \(\frac {{ V }_{m}}{ R }\) sin ωt
i = I[m] sin ωt …… (3)
where \(\frac {{ V }_{m}}{ R }\) = I[m], the peak value of alternating current in the circuit. From equations (1) and (3), it is clear that the applied voltage and the current are in phase with each
other in a resistive circuit. It means that they reach their maxima and minima simultaneously. This is indicated in the phasor diagram. The wave diagram also depicts that the current is in phase with
the applied voltage.
Question 4.
Find out the phase relationship between voltage and current in a pure capacitor circuit.
AC circuit containing only a capacitor:
Consider a circuit containing a capacitor of capacitance C connected across an alternating voltage source.
The alternating voltage is given by
υ = V[m] sin ωt …… (1)
Let q be the instantaneous charge on the capacitor. The emf across the capacitor at that instant is \(\frac { q }{ C }\). According to Kirchoff’s loop rule,.
υ – \(\frac { q }{ C }\) = 0
⇒ q = CV[m] sin ωt
By the definition of current,
i = \(\frac { dq }{ dt }\) = CV[m] \(\frac { d }{ dt }\) (sin ωt)
= CV[m] ω cos ωt
or i = \(\frac{\mathrm{V}_{m}}{1 / \mathrm{C} \omega}\) sin \(\left(\omega t+\frac{\pi}{2}\right)\)
i = I[m] sin \(\left(\omega t+\frac{\pi}{2}\right)\) ….. (2)
where \(\frac{\mathrm{V}_{m}}{1 / \mathrm{C} \omega}\) = I[m], the peak value of the alternating current. From equations (1) and (2), it is clear that the current leads the
applied voltage by π/2 in a capacitive circuit. The wave diagram for a capacitive circuit also shows that the current leads the applied voltage by 90°.
Question 5.
What is LC oscillation? and explain the generation of LC oscillation.
Whenever energy is given to a circuit containing a pure inductor of inductance L and a capacitor of capacitance C, the energy oscillates back and forth between the magnetic field of the inductor and
the electric field of the capacitor. Thus the electrical oscillations of definite frequency are generated. These oscillations are called LC oscillations.
Generation of LC oscillations:
Let us assume that the capacitor is fully charged with maximum charge Q[m] at the initial stage, so that the energy stored in the capacitor is maximum and is given by U[Em] = \(\frac{\mathrm{Q}_{\mathrm{m}}^{2}}{2 \mathrm{C}}\). As there is no current in the inductor, the energy stored in it is zero, i.e., U[B] = 0. Therefore, the total energy is wholly electrical.
The capacitor now begins to discharge through the inductor that establishes current i in a clockwise direction. This current produces a magnetic field around the inductor and the energy stored in the
inductor is given by U[B] = \(\frac {{ Li }^{ 2 }}{ 2 }\). As the charge in the capacitor decreases, the energy stored in it also decreases and is given by U[E] = \(\frac {{ q }^{ 2 }}{ 2C }\). Thus
there is a transfer of some part of the energy from the capacitor to the inductor. At that instant, the total energy is the sum of electrical and magnetic energies.
When the charges in the capacitor are exhausted, its energy becomes zero i.e., UE = 0. The energy is fully transferred to the magnetic field of the inductor and its energy is maximum. This maximum
energy is given by UB = \(\frac{\mathrm{LI}_{\mathrm{m}}^{2}}{2}\) where I[m] is the maximum current flowing in the circuit. The total energy is wholly magnetic.
Even though the charge in the capacitor is zero, the current will continue to flow in the same direction because the inductor will not allow it to stop immediately. The current is made to flow with
decreasing magnitude by the collapsing magnetic field of the inductor. As a result of this, the capacitor begins to charge in the opposite direction. A part of the energy is transferred from the
inductor back to the capacitor. The total energy is the sum of the electrical and magnetic energies.
When the current in the circuit reduces to zero, the capacitor becomes fully charged in the opposite direction. The energy stored in the capacitor becomes maximum. Since the current is zero, the
energy stored in the inductor is zero. The total energy is wholly electrical. The state of the circuit is similar to the initial state but the difference is that the capacitor is charged in opposite
direction. The capacitor then starts to discharge through the inductor with an anti-clockwise current. The total energy is the sum of the electrical and magnetic energies.
As already explained, the processes are repeated in opposite direction. Finally, the circuit returns to the initial state. Thus, when the circuit goes through these stages, an alternating current
flows in the circuit. As this process is repeated again and again, the electrical oscillations of definite frequency are generated. These are known as LC oscillations. In the ideal LC circuit, there
is no loss of energy. Therefore, the oscillations will continue indefinitely. Such oscillations are called undamped oscillations.
Samacheer Kalvi 12th Physics Electromagnetic Induction and Alternating Current Numerical Problems
Question 1.
A coil has 2000 turns and an area of 70 cm^2. The magnetic field perpendicular to the plane of the coil is 0.3 Wb/m^2. The coil takes 0.1 s to rotate through 180°. Then what is the value of the induced emf?
Magnitude of change in flux,
|∆Φ| = |NBA (cos 180° – cos 0°)|
= |NBA(-1 – 1)| = |-2 NBA| = |2 NBA|
N = 2000
B = 0.3 Wb/m^2
A = 70 x 10^-4 m^2
t = 0.1 sec
Induced emf, ε = \(\frac { \left| \Delta \phi \right| }{ \Delta t } \) = \(\frac { 2NBA }{ ∆t }\) = \(\frac {{ 2 × 2000 × 0.3 × 70 × 10 }^{-4}}{ 0.1 }\)
ε = 84 V
Question 2.
A rectangular loop of sides 8 cm and 2 cm is lying in a uniform magnetic field of magnitude 0.5 T with its plane normal to the field. The field is now gradually reduced at the rate of 0.02 T/s. If
the resistance of the loop is 1.6 Ω, then find the power dissipated by the loop as heat.
Induced emf, |ε| = \(\frac { dΦ }{ dt } \) = A \(\frac { dB }{ dt } \) = 8 × 2 × 10^-4 × 0.02
ε = 3.2 × 10^-5 V
Induced current, I = \(\frac { ε }{ R } \) = 2 × 10^-5 A
Power loss = I^2R = 4 × 10^-10 × 1.6 = 6.4 × 10^-10 W
Question 3.
A current of 2 A flowing through a coil of 100 turns gives rise to a magnetic flux of 5 x 10^-5 Wb per turn. What is the magnetic energy associated with the coil?
Self inductance of coil, L = \(\frac { NΦ }{ I } \) = \(\frac {{ 100 × 5 × 10 }^{-5}}{ 2 } \)
= 2.5 × 10^-3 H
Magnetic energy associated with inductance,
U = \(\frac { 1 }{ 2 }\) LI^2 = \(\frac { 1 }{ 2 }\) × 2.5 × 10^-3 × (2)^2
= \(\frac { 1 }{ 2 }\) × 2.5 × 10^-3 × 4 = 5 × 10^-3 J
Question 4.
A transformer is used to light a 140 W, 24 V bulb from a 240 V AC mains. The current in the main cable is 0.7 A. Find the efficiency of the transformer.
η = \(\frac { 140 }{ 240 × 0.7 }\) × 100 = 83.3%
Question 5.
In an ideal step-up transformer, the turn ratio is 1 : 10. The resistance of 200 ohms connected across the secondary is drawing a current of 0.5 A. What are the primary voltage and current?
Secondary voltage, V[s] = I[s] R = 0.5 × 200 = 100 V
Primary voltage, E[p] = \(\frac {{ V }_{ s }}{ 10 }\) = 10 V
Primary current, I[p] = 10 × I[s] = 5 A
Question 6.
A capacitor of capacitance 2 μF is connected in a tank circuit oscillating with a frequency of 1 kHz. If the current flowing in the circuit is 2 mA, then find the voltage across the capacitor.
Voltage across the capacitor, V = I X[C] = \(\frac { I }{ 2πfC }\) = \(\frac {{ 2 × 10 }^{-3}}{ 2π × 10^{3} × 2 × 10^{-6}}\) ≈ 0.16 V
Question 7.
An ideal inductor takes a current of 10 A when connected to a 125 V, 50 Hz AC supply. A pure resistor across the same source takes 12.5 A. If the two are connected in series across a 100 √2 V, 40 Hz
supply, then calculate the current through the circuit.
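(A worked solution, not given in the original:)
From the first supply, X[L] = \(\frac { 125 }{ 10 }\) = 12.5 Ω at 50 Hz, and R = \(\frac { 125 }{ 12.5 }\) = 10 Ω.
At 40 Hz, X[L] = 12.5 × \(\frac { 40 }{ 50 }\) = 10 Ω.
Impedance, Z = \(\sqrt { R^{2}+X_{L}^{2} }\) = \(\sqrt { 100+100 }\) = 10√2 Ω
Current, I = \(\frac { 100√2 }{ 10√2 }\) = 10 A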
Question 8.
An LCR series circuit containing a resistance of 120 Ω. has an angular resonance frequency of 4 x 10^5 rad s^-1. At resonance, the voltages across resistance and inductance are 60 V and 40 V,
respectively. Find the values of L and C.
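(A worked solution, not given in the original:)
Current at resonance, I = \(\frac { V_{R} }{ R }\) = \(\frac { 60 }{ 120 }\) = 0.5 A
X[L] = \(\frac { V_{L} }{ I }\) = \(\frac { 40 }{ 0.5 }\) = 80 Ω, so L = \(\frac { 80 }{ 4 × 10^{5} }\) = 2 × 10^-4 H
At resonance X[C] = X[L], so C = \(\frac { 1 }{ 4 × 10^{5} × 80 }\) = 3.125 × 10^-8 F ≈ 31.25 nF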
Question 9.
A coil of inductive reactance 31 Ω has a resistance of 8 Ω. It is placed in series with a capacitor of capacitance reactance 25 Ω. The combination is connected to an ac source of 110 volts. Find the
power factor of the circuit.
Impedance, Z = \(\sqrt { R^{2}+(X_{L}-X_{C})^{2} }\) = \(\sqrt { 8^{2}+(31-25)^{2} }\) = 10 Ω
Power factor, cos φ = \(\frac { R }{ Z }\) = \(\frac { 8 }{ 10 }\) = 0.8
Question 10.
The power factor of an RL circuit is \(\frac { 1 }{ √2 }\). If the frequency of AC is doubled, what will be the power factor?
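(A worked solution, not given in the original:)
cos φ = \(\frac { 1 }{ √2 }\) means R = X[L]. Doubling the frequency doubles X[L], so the new power factor is
cos φ′ = \(\frac { R }{ \sqrt { R^{2}+(2R)^{2} } }\) = \(\frac { 1 }{ √5 }\) ≈ 0.447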
Question 11.
The instantaneous value of alternating current and voltage are given as i = \(\frac { 1 }{ √2 }\) sin (100 πt) A and e = \(\frac { 1 }{ √2 }\) sin(100 πt + \(\frac { π }{ 3 }\)) volt. Find the
average power in watts consumed in the circuit.
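(A worked solution, not given in the original:)
Average power, P = \(\frac { E_{0}I_{0} }{ 2 }\) cos φ = \(\frac { 1 }{ 2 }\) × \(\frac { 1 }{ √2 }\) × \(\frac { 1 }{ √2 }\) × cos 60° = \(\frac { 1 }{ 8 }\) = 0.125 W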
Common Errors and their Rectifications:
Common Errors:
1. Students sometimes may confuse the peak current and instantaneous value of current and emf.
2. They may confuse the roles of R, L, and C in AC circuits and the relation between current and induced emf.
Rectifications:
1. Instantaneous current, i = I[0] sin ωt; Peak current, I[0] = √2 I[rms]; Instantaneous emf, e = E[0] sin ωt; Peak emf, E[0] = √2 E[rms]
2. In an inductor: the current lags the emf by \(\frac { π }{ 2 }\) rad.
3. In a resistor: the current and the emf are in phase.
4. In a capacitor: the current leads the emf by \(\frac { π }{ 2 }\) rad.
PX436-15 General Relativity
Introductory description
Einstein's general theory of relativity (GR) is the basis for our understanding of black holes and the Universe on its largest scales. In GR the Newtonian concept of a gravitational force is
abolished, to be replaced by a new notion, that of the curvature of space-time. This leads in turn to predictions of phenomena such as the bending of light and gravitational time dilation that are
well tested, and others, such as gravitational waves, which have only recently been directly detected.
The module starts with a recap of Special Relativity, emphasizing its geometrical significance. The formalism of curved coordinate systems is then developed. Einstein's equivalence principle is used
to link the two to arrive at the field equations of GR. The remainder of the module looks at the application of general relativity to stellar collapse, neutron stars and black-holes, gravitational
waves, including their detection, and finally to cosmology where the origin of the "cosmological constant" - nowadays called "dark energy" - becomes apparent.
Module aims
To present the theory of General Relativity and its applications in modern astrophysics, and to give an understanding of black holes.
Outline syllabus
This is an indicative module outline only to give an indication of the sort of topics that may be covered. Actual sessions held may differ.
The geometry of space-time and the invariant “interval” in special relativity; geodesics and equations of motion applied to circular orbits within the Schwarzschild metric; 4-vector formulation of
special relativity; metric of special relativity; the equivalence principle and local inertial frames; motivation for considering curved space-time; vectors and tensors in curved coordinate systems;
geodesic motion revisited; motion in almost-flat space-time: the Newtonian limit; the curvature and stress-energy tensors; how the metric is determined: Einstein's field equations; Schwarzschild
metric; observable consequences; black-holes; stability of orbits; extraction of energy; gravitational radiation and its detection; cosmology: the Robertson-Walker metric
Learning outcomes
By the end of the module, students should be able to:
• Explain the metric nature of special and general relativity, how the metric determines the motion of particles
• Undertake calculations involving the Schwarzschild metric
• Describe features of black-holes
• Demonstrate knowledge of current attempts to detect gravitational waves
Indicative reading list
B.F. Schutz, A First Course in General Relativity, Cambridge University Press
M.P. Hobson, G. Efstathiou, A.N. Lasenby, General Relativity: An Introduction for Physicists, CUP
L.D. Landau, E.M. Lifshitz, The Classical Theory of Fields
View reading list on Talis Aspire
The theory of General Relativity, like quantum theory, has been the result of collaboration between people working in physics and in mathematics with insights flowing in both directions. At its core
is a simple hypothesis about observations (the equivalence principle), which leads to a theory of gravity based on the differential geometry of curved spaces. This module covers the necessary
mathematics and computes some of the consequences of the theory for the physical Universe.
Subject specific skills
Knowledge of mathematics and physics. Skills in modelling, reasoning, thinking.
Transferable skills
Analytical, communication, problem-solving, self-study
Study time
Type Required
Lectures 30 sessions of 1 hour (20%)
Seminars (0%)
Private study 120 hours (80%)
Total 150 hours
Private study description
Self study
No further costs have been identified for this module.
You must pass all assessment components to pass the module.
Assessment group B2
Weighting Study time Eligible for self-certification
In-person Examination 100% No
Answer 3 questions
• Answerbook Pink (12 page)
• Students may use a calculator
Feedback on assessment
Personal tutor, group feedback
This module is Optional for:
• TMAA-G1PE Master of Advanced Study in Mathematical Sciences
□ Year 1 of G1PE Master of Advanced Study in Mathematical Sciences
• Year 1 of TMAA-G1PD Postgraduate Taught Interdisciplinary Mathematics (Diploma plus MSc)
• Year 1 of TMAA-G1P0 Postgraduate Taught Mathematics
• Year 1 of TMAA-G1PC Postgraduate Taught Mathematics (Diploma plus MSc)
• Year 4 of UPXA-F303 Undergraduate Physics (MPhys)
This module is Option list A for:
• Year 1 of TMAA-G1P0 Postgraduate Taught Mathematics
• Year 3 of UMAA-G100 Undergraduate Mathematics (BSc)
• Year 3 of UMAA-G103 Undergraduate Mathematics (MMath)
• Year 4 of UMAA-G101 Undergraduate Mathematics with Intercalated Year
This module is Option list B for:
• Year 4 of UPXA-FG33 Undergraduate Mathematics and Physics (BSc MMathPhys)
• Year 4 of UPXA-FG31 Undergraduate Mathematics and Physics (MMathPhys)
This module is Option list C for:
• UMAA-G105 Undergraduate Master of Mathematics (with Intercalated Year)
□ Year 3 of G105 Mathematics (MMath) with Intercalated Year
□ Year 4 of G105 Mathematics (MMath) with Intercalated Year
□ Year 5 of G105 Mathematics (MMath) with Intercalated Year
• UMAA-G103 Undergraduate Mathematics (MMath)
□ Year 3 of G103 Mathematics (MMath)
□ Year 4 of G103 Mathematics (MMath)
• UMAA-G106 Undergraduate Mathematics (MMath) with Study in Europe
□ Year 3 of G106 Mathematics (MMath) with Study in Europe
□ Year 4 of G106 Mathematics (MMath) with Study in Europe
Chapter 4.10: Vector Addition and Subtraction: Analytical Methods
• Define the rules of vector addition and subtraction using analytical methods.
• Apply analytical methods to determine vertical and horizontal component vectors.
• Apply analytical methods to determine the magnitude and direction of a resultant vector.
Analytical methods of vector addition and subtraction employ geometry and simple trigonometry rather than the ruler and protractor of graphical methods. Part of the graphical technique is retained,
because vectors are still represented by arrows for easy visualization. However, analytical methods are more concise, accurate, and precise than graphical methods, which are limited by the accuracy
with which a drawing can be made. Analytical methods are limited only by the accuracy and precision with which physical quantities are known.
You will be using trigonometry in this section.
Figure 11. Trig Tour
Here is a very nice PhET simulation to help review those concepts.
Resolving a Vector into Perpendicular Components
Analytical techniques and right triangles go hand-in-hand in physics because (among other things) motions along perpendicular directions are independent. We very often need to separate a vector into
perpendicular components. For example, given a vector A like that in Figure 1, we may wish to find which two perpendicular vectors, A[x] and A[y], add to produce it.
Figure 1. The vector A, with its tail at the origin of an x, y-coordinate system, is shown together with its x- and y-components, A[x] and A[y]. These vectors form a right triangle. The analytical
relationships among these vectors are summarized below.
Note that this relationship between vector components and the resultant vector holds only for vector quantities (which include both magnitude and direction). The relationship does not apply for the magnitudes alone: it is not true that the sum of the magnitudes of the components equals the magnitude of the resultant. That is, A[x] + A[y] ≠ A in general.
If the vector A and its angle θ (its direction) are known, the components can be found from the relationships shown below.
Figure 2. The magnitudes of the vector components A[x] and A[y] can be related to the resultant vector A and the angle θ with trigonometric identities. Here we see that A[x]=A cos θ and A[y]=A sinθ.
Suppose, for example, that A is the vector representing the total displacement of the person walking in a city, as considered in Chapter 3.1 Kinematics in Two Dimensions: An Introduction and Chapter 3.2 Vector Addition and Subtraction: Graphical Methods.
Figure 3. We can use the relationships A[x]=A cos θ and A[y]=A sinθ to determine the magnitude of the horizontal and vertical component vectors in this example.
Then A=10.3 blocks and θ=29.1°, so that A[x] = A cos θ = (10.3 blocks)(cos 29.1°) = 9.0 blocks and A[y] = A sin θ = (10.3 blocks)(sin 29.1°) = 5.0 blocks.
Calculating a Resultant Vector
If the perpendicular components A[x] and A[y] of a vector are known, then its magnitude A and direction θ can be found from A = √(A[x]^2 + A[y]^2) and θ = tan^-1(A[y]/A[x]).
Figure 4. The magnitude and direction of the resultant vector can be determined once the horizontal and vertical components A[x] and A[y]have been determined.
Note that the equation A = √(A[x]^2 + A[y]^2) is just the Pythagorean theorem relating the legs of a right triangle to the length of the hypotenuse.
The equations A[x] = A cos θ and A[y] = A sin θ go from A and θ to A[x] and A[y]; the equations A = √(A[x]^2 + A[y]^2) and θ = tan^-1(A[y]/A[x]) go from A[x] and A[y] back to A and θ. Both processes are crucial to analytical methods of vector addition and subtraction.
Adding Vectors Using Analytical Methods
To see how to add vectors using perpendicular components, consider Figure 5, in which the vectors A and B are added to produce the resultant R.
Figure 5. Vectors A and B are two legs of a walk, and R is the resultant or total displacement. You can use analytical methods to determine the magnitude and direction of R.
If A and B represent two legs of a walk (two displacements), then R is the total displacement, and the same end point could be reached by walking straight ahead first in the x-direction and then in the y-direction. Those two paths are the x– and y-components of the resultant, R[x] and R[y]. Once R[x] and R[y] are known, we can find R and θ using the equations R = √(R[x]^2 + R[y]^2) and θ = tan^-1(R[y]/R[x]). The steps are as follows.
Step 1. Identify the x- and y-axes that will be used in the problem. Then, find the components of each vector to be added along the chosen perpendicular axes. Use the equations A[x] = A cos θ and A[y] = A sin θ to find the components. In Figure 6, these components are A[x], A[y], B[x], and B[y]; the angles the vectors make with the x-axis are θ[A] and θ[B], respectively.
Figure 6. To add vectors A and B, first determine the horizontal and vertical components of each vector. These are the dotted vectors A[x], A[y], B[x]and B[y] shown in the image.
Step 2. Find the components of the resultant along each axis by adding the components of the individual vectors along that axis. That is, as shown in Figure 7, R[x] = A[x] + B[x] and R[y] = A[y] + B[y].
Figure 7. The magnitude of the vectors A[x] and B[x] add to give the magnitude R[x] of the resultant vector in the horizontal direction. Similarly, the magnitudes of the vectors A[y] and B[y]add to
give the magnitude R[y] of the resultant vector in the vertical direction.
Components along the same axis, say the x-axis, are vectors along the same line and, thus, can be added to one another like ordinary numbers. The same is true for components along the y-axis. (For
example, a 9-block eastward walk could be taken in two legs, the first 3 blocks east and the second 6 blocks east, for a total of 9, because they are along the same direction.) So resolving vectors
into components along common axes makes it easier to add them. Now that the components of R are known, its magnitude and direction can be found.
Step 3. To get the magnitude R of the resultant, use the Pythagorean theorem: R = √(R[x]^2 + R[y]^2).
Step 4. To get the direction of the resultant, use θ = tan^-1(R[y]/R[x]).
The following example illustrates this technique for adding vectors using perpendicular components.
Example 1: Adding Vectors Using Analytical Methods
Add the vector A to the vector B shown in Figure 8, using perpendicular components along the x– and y-axes. The x– and y-axes are along the east–west and north–south directions, respectively. Vector A represents the first leg of a walk, 53.0 m in a direction 20.0° north of east. Vector B represents the second leg, 34.0 m in a direction 63.0° north of east.
Figure 8. Vector A has magnitude 53.0 m and direction 20.0^0 north of the x-axis. Vector B has magnitude 34.0 m and direction 63.0^0 north of the x-axis. You can use analytical methods to determine
the magnitude and direction of R.
The components of A and B along the x– and y-axes represent walking due east and due north to get to the same ending point. Once found, they are combined to produce the resultant.
Following the method outlined above, we first find the components of A and B along the x– and y-axes. Note that A = 53.0 m, θ[A] = 20.0°, B = 34.0 m, and θ[B] = 63.0°. We find the x-components by using A[x] = A cos θ, which gives
A[x] = (53.0 m)(cos 20.0°) = 49.8 m and B[x] = (34.0 m)(cos 63.0°) = 15.4 m.
Similarly, the y-components are found using A[y] = A sin θ:
A[y] = (53.0 m)(sin 20.0°) = 18.1 m and B[y] = (34.0 m)(sin 63.0°) = 30.3 m.
The x– and y-components of the resultant are thus
R[x] = A[x] + B[x] = 49.8 m + 15.4 m = 65.2 m and R[y] = A[y] + B[y] = 18.1 m + 30.3 m = 48.4 m.
Now we can find the magnitude of the resultant by using the Pythagorean theorem:
R = √(R[x]^2 + R[y]^2) = √((65.2 m)^2 + (48.4 m)^2),
so that R = 81.2 m.
Finally, we find the direction of the resultant:
θ = tan^-1(R[y]/R[x]) = tan^-1(48.4/65.2) = 36.6° north of east.
Figure 9. Using analytical methods, we see that the magnitude of R is 81.2 m and its direction is 36.6^0 north of east.
This example illustrates the addition of vectors using perpendicular components. Vector subtraction using perpendicular components is very similar—it is just the addition of a negative vector.
Subtraction of vectors is accomplished by the addition of a negative vector; that is, A − B ≡ A + (−B). The method for the subtraction of vectors using perpendicular components is therefore identical to that for addition. The components of −B are the negatives of the components of B, so the x– and y-components of the resultant A − B = R are R[x] = A[x] + (−B[x]) and R[y] = A[y] + (−B[y]),
and the rest of the method outlined above is identical to that for addition. (See Figure 10.)
Analyzing vectors using perpendicular components is very useful in many areas of physics, because perpendicular quantities are often independent of one another.
Figure 10. The subtraction of the two vectors shown in Figure 5. The components of -B are the negatives of the components of B. The method of subtraction is the same as that for addition.
Learn how to add vectors. Drag vectors onto a graph, change their length and angle, and sum them together. The magnitude, angle, and components of each vector can be displayed in several formats.
Please note that this simulation uses Flash so it might not work on all machines.
Figure 11. Vector Addition
• The analytical method of vector addition and subtraction involves using the Pythagorean theorem and trigonometric identities to determine the magnitude and direction of a resultant vector.
• The steps to add vectors A and B using the analytical method are as follows (see the short code sketch after this list):
Step 1: Determine the coordinate system for the vectors. Then, determine the horizontal and vertical components of each vector using the equations A[x] = A cos θ and A[y] = A sin θ.
Step 2: Add the horizontal and vertical components of each vector to determine the components of the resultant: R[x] = A[x] + B[x] and R[y] = A[y] + B[y].
Step 3: Use the Pythagorean theorem to determine the magnitude R of the resultant vector: R = √(R[x]^2 + R[y]^2).
Step 4: Use a trigonometric identity to determine the direction θ of R: θ = tan^-1(R[y]/R[x]).
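The four steps translate directly into a few lines of code. The following sketch (in R, added here for illustration; the function name add_vectors is invented) reproduces the numbers of Example 1 above:
add_vectors <- function(A, thetaA, B, thetaB) {   # angles in degrees
  deg <- pi / 180
  Rx <- A * cos(thetaA * deg) + B * cos(thetaB * deg)   # Steps 1 and 2
  Ry <- A * sin(thetaA * deg) + B * sin(thetaB * deg)
  R  <- sqrt(Rx^2 + Ry^2)                               # Step 3: Pythagorean theorem
  theta <- atan2(Ry, Rx) / deg                          # Step 4: direction in degrees
  c(magnitude = R, direction = theta)
}
add_vectors(53.0, 20.0, 34.0, 63.0)   # magnitude 81.2, direction 36.6, as in Example 1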
Conceptual Questions
1: Suppose you add two vectors A and B. What relative direction between them produces the resultant with the greatest magnitude, and what is that maximum magnitude? What relative direction produces the resultant with the smallest magnitude, and what is that minimum magnitude?
2: Give an example of a nonzero vector that has a component of zero.
3: Explain why a vector cannot have a component greater than its own magnitude.
Problems & Exercises
1: Find the following for path C in Figure 12: (a) the total distance traveled and (b) the magnitude and direction of the displacement from start to finish. In this part of the problem, explicitly
show how you follow the steps of the analytical method of vector addition.
Figure 12. The various lines represent paths taken by different people walking in a city. All blocks are 120 m on a side.
2: Find the following for path D in Figure 12: (a) the total distance traveled and (b) the magnitude and direction of the displacement from start to finish. In this part of the problem, explicitly
show how you follow the steps of the analytical method of vector addition.
3: Find the north and east components of the displacement from San Francisco to Sacramento shown in Figure 13.
Figure 13.
4: Solve the following problem using analytical techniques: Suppose you walk 18.0 m straight west and then 25.0 m straight north. How far are you from your starting point, and what is the compass
direction of a line connecting your starting point to your final position? (If you represent the two legs of the walk as vector displacements A and B, as in Figure 14, then this problem asks you to find their sum R = A + B.)
Figure 14. The two displacements A and B add to give a total displacement R having magnitude R and direction θ.
Note that you can also solve this graphically. Discuss why the analytical technique for solving this problem is potentially more accurate than the graphical technique.
5: Repeat Exercise 4 using analytical techniques, but reverse the order of the two legs of the walk and show that you get the same final result. (This problem shows that adding them in reverse order
gives the same result—that is, B + A = A + B.)
6: You drive 7.50 km in a straight line in a direction 15° east of north. (a) Find the distances you would have to drive straight east and then straight north to arrive at the same point. (This
determination is equivalent to find the components of the displacement along the east and north directions.) (b) Show that you still arrive at the same point if the east and north legs are reversed
in order.
7: Do Exercise 4 again using analytical techniques and change the second leg of the walk to 25.0 m straight south. (This is equivalent to subtracting B from A, that is, finding R′ = A − B.)
analytical method
the method of determining the magnitude and direction of a resultant vector using the Pythagorean theorem and trigonometric identities
Problems & Exercises
1: (a) 13 × 120 m = 1560 m = 1.56 km (b) 120 m east
2: (a) 13 × 120 m = 1560 m = 1.56 km (b) magnitude = 646 m at 21.8° north of east
3: North-component 87.0 km, east-component 87.0 km
4: 30.8 m, 35.8° west of north
5: 30.8 m, 35.8° west of north
7: (a) 30.8 m, 35.8° west of south
If Pacific Rim Followed Its Own Math, The Kaiju Would Have Won
In the microscopic serenity of a test tube, bacteria multiply exponentially. Given enough food and space, the population will quickly double itself every few days or even hours.
High school biology students might remember the math that goes along with this growth—something in the form of P=e^rt (like the shampoo, professors urge students to remember). Interdimensional
monsters from the film Pacific Rim invade Earth in much the same way. In Pacific Rim, kaiju researcher Hermann Gottlieb is quoted as saying:
In the beginning the Kaiju attacks were spaced by twenty-four weeks, then twelve, then six, then every two weeks. The last one in Sydney…was a week. In four days, we could be seeing the Kaiju
every eight hours until they are coming every four minutes.
Gottlieb's timeframe of a looming apocalypse has the numbers he needs to predict when the next kaiju will emerge from the “breach.” We can do the same. Unfortunately, even giving Gottlieb the benefit
of the doubt, the data doesn’t make any sense. In the quote from Gottlieb, we learn that when kaiju first started appearing, the time between each “event” was half that of the previous interlude—24
weeks becomes 12 becomes 6. Then, most likely for dramatic effect, he skips ahead to where monsters are emerging from the sea every four minutes. If you assume the exponential shortening of time
between events, like how bacteria grow (only in reverse), you get something like this:
The graph above shows just how quickly the number of days between attacks shortens as a function of each emergence. For example, the fifth kaiju to stomp onto land would arrive only 11 days after the
fourth kaiju, which itself emerged three weeks after the third. Again, this mimics the relentless growth of bacteria. Both bacteria and the kaiju follow exponential equations—in this case the gap between attacks is halved with each event, t[n] = 24 × (1/2)^(n-1) weeks. That math hides a tremendously terrifying kaiju emergence rate. Imagine a checkerboard with standard checker pieces. On the first square you place one checker, half an inch tall. Following
exponential growth, on the second square of the checkerboard you place two checkers, now a total of one inch tall. On the third square you place four checkers, on the fourth you place eight, and so
on until you reach the 64^th square. Simply following exponential growth, the height of the checkers on the 64^th square should be about 73 trillion miles—almost twelve and a half light years up into
space, well past our Sun and a few stars. Kaiju emergence, based on what the film says, is a bit different. Instead of doubling the kaiju population every so often, the time between emergences is
like the checkers in reverse. You start with a large amount of time between attacks and before you know it there is a "Category 5" kaiju appearing every few nanoseconds. In short, if Gottlieb is on
the right track with his math, things get apocalyptic pretty quickly. But how long would humanity have? Adding up all the time between kaiju attacks, assuming that having a kaiju appear every minute
or less is certain doom, the Jaegers would have about 11 months before all hell broke loose. This is where the data and the film start disagreeing. According to the scholarly nerds who run wikis like
this, the time between the first kaiju attack and the last (when the breach was sealed with a nuclear detonation) was 11 years. If you follow the math once more, the time between kaiju attacks after
11 years of emerging would be far less than the Planck time–possibly the smallest amount of time we could ever measure. Also, during that 11 years there were 46 confirmed kaiju attacks, while the
math says less than 20 emergences would spell extinction. At least the novelization and official canon material of Pacific Rim are consistent in their disregard for their own mathematics. In an
analysis of kaiju attack frequency over at Nerdometrics, the author catalogued all explicitly stated kaiju attacks and the time between them. He was expecting to find something similar to what I
calculated—a simple and elegant exponential curve. Instead, he found this:
So, looking at the attacks individually, we don’t see anything even close to the math laid out by Gottlieb so forcefully in the movie. Pacific Rim still makes a good argument for cancelling the
apocalypse quickly—if kaiju really were emerging like bacteria divide there would be a huge problem. But apparently the film is conflicted on how much of a problem emergence really is. Gottlieb
insists that numbers are “as close as we get to the handwriting of God.” God gave you some bad information, Hermann.
Further Reading:
Pacific Rim Physics (Part 1): A Rocket Punch is a Boeing 747 to the Face
Pacific Rim Physics (Part 2): In a Nuclear Explosion Bubble at the Bottom of the Ocean
Image Credits: Pacific Rim poster by toybot studios; Kaiju attack graph via Nerdometrics
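As a sanity check on the arithmetic above, here is a short R sketch (mine, not from the article) that halves the 24-week gap with each emergence:
gaps <- 24 * 7 * 24 * 60 / 2^(0:30)   # gaps between attacks in minutes, halving each event
doom <- which(gaps < 1)[1]            # first emergence with under a minute between attacks
doom                                  # 19: fewer than 20 emergences, as the article states
sum(gaps[1:doom]) / (60 * 24 * 30)    # elapsed time in 30-day months: about 11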
STOCHASTIC VOLATILITY MODEL ASSIGNMENT HELP - Finance Assignment Helpdesk
What is Stochastic Volatility Model Assignment Help Services Online?
Stochastic Volatility Model Assignment Help Services Online are academic assistance services that cater to students studying finance, economics, or related fields and require assistance with
understanding and solving problems related to the stochastic volatility model (SVM). The SVM is a popular mathematical model used in financial econometrics to capture the volatility of financial
assets, such as stocks, options, or currencies, which are known to exhibit time-varying and unpredictable volatility.
Stochastic Volatility Model Assignment Help Services Online provide expert guidance and support to students who may face challenges in comprehending the underlying concepts and techniques associated
with the SVM. The services may include assistance with topics such as the basic principles of stochastic volatility modeling, parameter estimation techniques, model validation, and application of SVM
in option pricing, risk management, and portfolio optimization.
The assignment help services are usually offered by experienced and knowledgeable tutors or subject matter experts who have a strong understanding of financial econometrics and statistical modeling.
They provide customized solutions to students’ assignments and projects, helping them gain a deeper understanding of the SVM and its applications in the real-world financial markets. Additionally,
the assignment help services ensure that the solutions provided are plagiarism-free, meaning they are original and do not contain any copied content.
In summary, Stochastic Volatility Model Assignment Help Services Online are valuable resources for students studying finance or related fields who require assistance with understanding and solving
problems related to the SVM. These services are provided by experienced tutors or experts, and the solutions offered are original and free from plagiarism.
Various Topics or Fundamentals Covered in Stochastic Volatility Model Assignment
Stochastic Volatility (SV) models are widely used in financial modeling to capture the dynamics of asset prices, particularly in options pricing, risk management, and portfolio optimization. These
models are characterized by their ability to capture the volatility dynamics of financial assets, which are known to be time-varying and exhibit stochastic behavior. In this assignment, we will cover
some of the fundamental concepts and topics related to Stochastic Volatility models.
Volatility: Volatility refers to the measure of the dispersion of returns for a given financial asset. It is a critical component in option pricing and risk management, as it determines the
uncertainty and risk associated with the underlying asset. SV models focus on capturing the stochastic nature of volatility, which implies that the volatility itself can change over time and is
subject to random fluctuations.
Stochastic Processes: Stochastic processes are mathematical models used to describe the random behavior of variables over time. In the context of SV models, stochastic processes are used to model the
dynamics of asset prices and volatilities. Some commonly used stochastic processes in SV models include Brownian motion, geometric Brownian motion, and Ornstein-Uhlenbeck process.
SV Model Formulation: SV models are typically formulated as stochastic differential equations (SDEs) that describe the evolution of asset prices and volatilities over time. These SDEs incorporate
random shocks or noise to capture the stochastic nature of volatility. The popular SV models include Heston model, GARCH model, and SABR model.
Estimation Techniques: Estimating the parameters of SV models from financial data is a challenging task due to the complex dynamics of asset prices and volatilities. Various estimation techniques are
used in SV modeling, such as maximum likelihood estimation (MLE), Bayesian estimation, and Kalman filtering. These methods allow for the estimation of model parameters based on historical data, which
can then be used for option pricing, risk management, and other financial applications.
Option Pricing: SV models are widely used in option pricing, as they can capture the stochastic nature of volatility, which has a significant impact on option prices. These models allow for the
pricing of options with non-constant volatility, which is a more realistic representation of financial markets. Popular option pricing methods based on SV models include Monte Carlo simulation,
finite difference methods, and closed-form solutions.
Risk Management: SV models are crucial in risk management, as they provide insights into the dynamics of asset prices and volatilities, which are essential for managing portfolio risk. SV models
allow for the estimation of risk measures such as value-at-risk (VaR) and conditional value-at-risk (CVaR), which are used to assess the risk of financial portfolios and determine appropriate risk
management strategies.
Model Calibration: Model calibration is an important step in SV modeling, which involves estimating the parameters of the SV model based on historical data. This step ensures that the SV model is
accurately representing the dynamics of the financial asset being modeled. Model calibration involves comparing the model’s predictions with historical data and adjusting the model parameters to
minimize the discrepancy between the model and the data.
In conclusion, Stochastic Volatility models are essential tools in financial modeling that allow for the modeling of time-varying and stochastic nature of volatility. Understanding the fundamentals
of SV models, including volatility, stochastic processes, model formulation, estimation techniques, option pricing, risk management, and model calibration, is crucial for effectively using these
models in various financial applications.
Explanation of Stochastic Volatility Model Assignment with the help of Unilever by showing all formulas
The Stochastic Volatility (SV) model is a popular financial model used to describe the dynamics of asset prices that exhibit time-varying volatility. Its best-known version, the Heston model, was introduced by Steven Heston in 1993, and SV models have been widely applied in options pricing, risk management, and other areas of quantitative finance.
The SV model assumes that the volatility of an asset, such as a stock or an index, is not constant but follows a stochastic process. In other words, the volatility of the asset is itself a random
variable that evolves over time. This makes the SV model different from traditional models, such as the Black-Scholes model, which assume a constant volatility.
The SV model can be expressed mathematically using the following equations:
Stochastic Differential Equation (SDE) for the Asset Price:
dS_t = μS_t dt + √(v_t) S_t dW_t^S
In this equation, S_t represents the asset price at time t, μ is the drift rate of the asset price, dt is the differential of time, v_t is the time-varying volatility (or variance) of the asset
price, dW_t^S is a standard Wiener process (random walk), and dS_t is the change in the asset price over a small time period dt.
SDE for the Volatility:
dv_t = κ(θ – v_t) dt + σ √(v_t) dW_t^v
In this equation, v_t represents the volatility of the asset price at time t, κ is the mean-reversion rate, θ is the long-term average volatility, σ is the volatility of volatility, dW_t^v is another
standard Wiener process, and dv_t is the change in the volatility over a small time period dt.
Correlation between Asset Price and Volatility:
dW_t^S dW_t^v = ρ dt
In this equation, ρ is the correlation between the asset price and its volatility, and dt is the differential of time.
The SV model allows for the estimation of parameters such as μ, κ, θ, σ, and ρ from historical data, which can be used for pricing options, risk management, and other financial applications.
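To make the dynamics concrete, here is a minimal Euler–Maruyama discretization of the two SDEs above, written in R. The parameter values are illustrative assumptions, not estimates from any data discussed here:
set.seed(1)
n <- 252; dt <- 1 / 252                   # one year of daily steps
mu <- 0.05; kappa <- 2; theta <- 0.04     # drift, mean reversion, long-run variance
sigma <- 0.3; rho <- -0.7                 # volatility of volatility, price-volatility correlation
S <- numeric(n + 1); v <- numeric(n + 1)
S[1] <- 100; v[1] <- theta
for (t in 1:n) {
  z1 <- rnorm(1)
  z2 <- rho * z1 + sqrt(1 - rho^2) * rnorm(1)   # correlated Wiener increments
  vp <- max(v[t], 0)                            # full truncation keeps the variance usable
  v[t + 1] <- v[t] + kappa * (theta - vp) * dt + sigma * sqrt(vp * dt) * z2
  S[t + 1] <- S[t] * (1 + mu * dt + sqrt(vp * dt) * z1)
}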
Now let’s consider the application of the SV model to Unilever, a company that manufactures and sells consumer goods globally. Unilever operates in various markets and is exposed to risks such as
changes in commodity prices, exchange rates, and consumer preferences, which can impact its stock price volatility.
The SV model can help Unilever estimate the time-varying volatility of its stock price and manage the associated risks. For example, Unilever can use the SV model to estimate the volatility of its
stock price, taking into account factors such as changes in market conditions, consumer demand, and macroeconomic indicators. This information can be useful in pricing options on Unilever’s stock,
managing its risk exposure, and making strategic decisions.
The estimated parameters of the SV model, such as μ, κ, θ, σ, and ρ, can be used to generate forecasts of the stock price and its volatility, which can assist Unilever in making informed decisions
about its financial strategies, such as hedging, portfolio optimization, and risk management.
In conclusion, the Stochastic Volatility (SV) model is a useful financial model for capturing time-varying volatility in asset prices. It can be applied to Unilever, a global consumer goods company,
to estimate the volatility of its stock price and manage associated risks. By estimating parameters such as drift rate, mean-reversion rate, long-term average volatility, volatility of volatility,
and correlation, the SV model can provide valuable insights into the dynamics of Unilever’s stock price and its associated risks.
Linear (1st Order) Taylor Approximation - Expii
The 1st Taylor approximation of \(f(x)\) at a point \(x=a\) is just a linear (degree 1) polynomial, namely \[ P(x) = f(a) + f'(a)(x-a)^1.\] This makes sense, at least, if \(f\) is differentiable at \(x=a\): it's just another way to phrase the tangent line approximation at a point! The intuition is that \(f(a) = P(a)\) and \(f'(a) = P'(a)\): the "zeroth" and first derivatives match.
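As a quick added example (not in the original): with \(f(x)=e^x\) and \(a=0\), we have \(f(0)=1\) and \(f'(0)=1\), so \(P(x) = 1 + x\), the familiar tangent-line approximation to \(e^x\) near zero.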
Kettelle's Algorithm
The study of parts supply optimization is best started at the place that first interested Jorge Fukuda: the example problem found in Chapter 7, Section 4 of “Statistical Theory of Reliability and Life Testing”, by Richard E. Barlow and Frank Proschan. I was fortunate to find a very inexpensive used copy of this older book (1975) in order to follow along.
Description of the Problem
This initial problem has the simplest set of conditions for an inventory optimization. The maintenance depot at a single base is required to supply spare parts for immediate installation on some
active service assembly (like an aircraft). The maintenance depot receives the failed part and can repair it in a known period of time. Such repaired parts are then available as valid spare parts for
another failure instance. The depot or base is considered an echelon, so this is a single echelon problem. Each part is singular, that is there are no sub-assemblies to the part needing repair, so
this is a single indenture problem as well.
In order to attempt to have parts available upon failure, there must be some stock inventory for each part; else any part in failure would have to wait for the entire part repair activity to
complete. In a pure statistical sense there can never be enough parts in initial stock inventory to always assure a part is on hand when needed. So, a measure of depot performance in its mission can
be the probability that a part will be on hand when needed. This is called the Fill Rate in the text. If no spare parts are held, this probability will be zero. As an infinite number of parts are
held, this probability will approach 100 percent.
For each part the Fill Rate will be the frequency of having a specific part on hand when a failure occurs divided by the sum of frequencies for all part fail rates. Since the denominator is a
constant for any collection of parts, a simplification is made to calculate only the numerator of the Fill Rate function; this is referred to this as the Fill Rate Numerator (FRN). Since part
failures are assumed to occur randomly, the Poisson distribution is used. The FRN will be a function of the part fail rate, the time required to repair each part, and the number of replacements
intended to be stocked as spares for this part. The individual FRN values are additive-separable, that is we can simply add the incremental FRN value for a specific part addition to the combined FRN
contributions of all other parts in an allocation. By magic of Palm’s Theorem this FRN equation is determined for us; we really do not have to derive it for ourselves. Extended reading in Jorge
Fukuda’s site on this point can be a plus.
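Read literally, the description above gives a one-line function. The sketch below is my reading of it, not the xmetric source; it assumes the FRN for a part is its fail rate weighted by the Poisson probability that fewer than s units are tied up in repair:
FRN_sketch <- function(fail.rate, repair.time, s) {
  PL <- fail.rate * repair.time   # expected number of units in repair (the pipeline)
  fail.rate * ppois(s - 1, PL)    # demand rate times P(a spare is on hand)
}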
Parts have a cost, and holding some number of them will consume a budget. So, the Kettelle algorithm is a way to identify the optimal performance measure for the depot given any specific budget for
spare parts that can be held.
For simplicity of example, this problem considers only 4 part types. Of course a real maintenance situation would likely consider hundreds or thousands of parts. The solution starts with no stocked
parts at all, resulting in a performance measure of zero. As parts are added to the initial stores, performance improves at some cost. As each part is added, an examination is made as to whether the
performance measure is the best at that cost or below. The set of varied part quantities that is optimal at a given cost level is considered an Un-dominated Allocation. As un-dominated allocations
are identified, all lesser performing combinations of part quantities are Excluded Allocations. An iterative process is performed to go through all likely quantities of each part in combination with
quantities of the other parts seeking the combinations of part quantities (the Allocations) that are Un-Dominated. Some part allocations may appear to be un-dominated in the order encountered, but
ultimately are found to be dominated by some later trial.
Since Kettelle’s algorithm tests all incremental additions of parts to identify Un-dominated part quantities, it is relatively computationally time consuming, perhaps too much so for practical use in
a large scale problem. However, it will identify all un-dominated allocations, and by study of this method we understand the concept of this kind of optimization.
The Kettelle Solution Using R
To initiate the solution, the example data has been placed in the xmetric package as a dataframe named Barlow.Proschan. A small function, FRN(), is an implementation of the Fill Rate Numerator
calculation that will be called repetitiously from the solution code.
The algorithm is completed in a function named Kettelle() taking a dataframe, having the same form as the example data, as primary argument. A default limit value may be altered to control the extent
of the evaluation. The output is a dataframe listing the Un-dominated Allocations containing the quantity of parts by type the total Cost and the resultant Performance measure for each allocation
selection. With an argument value of show=TRUE a graphic display will appear showing the Un-dominated Allocations as red points and the Excluded Allocations as blue. A name label can be provided as
an identifier in the chart title. Further, the performance measure may be selected to represent “FRN” (as default), “Fill Rate”, or “EBO” as to be discussed later.
The heart of this General Kettelle Algorithm (GKA, in the text) is a triple-nested loop going through all parts, all Un-dominated Allocations previously identified, and all quantities of each
successive part that generate incremental performance improvement greater than the set limit. As each new quantity of each part is encountered a series of logic blocks determine whether this addition
represents a new Un-dominated Allocation, and if so whether it identifies existing entries in the pending UdomAll dataframe as Excluded.
The result of interest to a depot manager is the sequence of Un-dominated Allocations. From this series, expressed as red dots here, the required budget for a target Fill Rate can be established, or
alternatively, the likely Fill Rate to be expected given a fixed budget. The specific stock quantities of each part required to achieve the optimal budgetary values is identified.
Viewing the Solution
Use of the Kettelle() function is very simple. First, some data must be identified. The small example dataset of 4 parts used by Barlow and Proschan has been incorporated into the xmetric package as
a dataframe named Barlow.Proschan.
The package library must be loaded once for an R session. Then the desired data built into the package must be loaded for use.
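The calls themselves are not shown here; presumably they are the conventional ones:
library(xmetric)
data(Barlow.Proschan)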
With the data loaded we can call the Kettelle function with some qualifying arguments. Here the data has been subveiwed to handle only parts 1 and 2.
parts1and2 <- Kettelle(Barlow.Proschan[1:2, ], data.name = "Parts 1 and 2", show = TRUE)
By assigning a label for the ouput object of this function, the UdomAll dataframe will be stored in session memory. This object can then be printed in the R console, from where it might be copied and
pasted into a spreadsheet for instance. Once the text of the output is placed in Excel a Data…Text_to_Columns command sequence will bring up a dialog to place the table in a more useable form.
Similar steps could be taken to examine parts 3 and 4, or parts 1 through 4 as Fukuda has demonstrated. Use of the "Fill Rate" performance measure simply alters the performance scale and values, the
graphic is unchanged.
Expected Back Orders (EBO)
The Fill Rate performance measure used so far in the Kettelle part allocation optimization does not reflect the full impact of not having a part upon demand, because it does not reflect the resulting
waiting time that may be required to eventually deliver the part. A better measure is defined as the Estimated Back Orders, or EBO. This measure correlates with the average waiting time for unfilled
demands. For the case where there are no spares held this will be the product of demand rate and repair turnaround time on an individual part basis. This product is referred to as the pipeline, and
is designated PL, below. As spares are held in increasing quantities, the EBO measure will decrease, approaching zero as spare holdings approach infinity.
After study through various texts, it has been found very difficult to generate this function in computer programmable terms given theoretical presentations. However, Jorge Fukuda has very elegantly
presented this function in terms of an Excel formula:
EBO = PL * POISSON (s, PL, FALSE ) + (PL − s) * [1− POISSON (s, PL,TRUE)]
, where
PL is the pipeline value determined by demand rate * turnaround time, and
s is the quantity of initial spares in stock.
As expected, the EBO is a function based on the Poisson distribution where the PL value is diminished as spare holdings increase. This EBO function has also been added to the xmetric package. The
source code for this function can be viewed by simply typing EBO at the R command prompt, assuming the xmetric library has been loaded. A test of this function now implemented in R permits
replication of a reference Table 2-1 in Sherbrooke’s text, giving further confidence in its use.
PL <- 0.5                           # an assumed pipeline value, purely for illustration
for (s in 0:10) print(EBO(s, PL))   # loop body reconstructed: tabulate EBO over stock levels
As with the FRN performance measure, EBO contributions are additive. Incremental differences due to additions to stock are additive, albeit negative values.
The optimization of EBO performance is a minimization rather than the maximization required with FRN, so the Kettelle() function needs to handle this addition. This is done by adding the modified
code for the minimization as a separate block after a conditional test for the direction of optimization as of version 0.0.3 of the xmetric package.
The following scripts present analysis of all 4 parts in the example set according to the “EBO” and “Fill Rate” performance measures. The results appear to be a reflection about an ultimate
performance horizontal; however examination of the un-dominated allocation tables would reveal some differences in part selections. Alteration of the limit value between the two measures is required
to maintain similar extent of study.
data.name="Parts 3 thru 4",
performance="EBO", limit=10^-2.5,
data.name="Parts 3 thru 4",
performance="Fill Rate",
It is possible to derive a system measure of availability from the EBO measures for each part, but this depends on the number of active parts in the system; a bit of data that has not been provided with our reference examples. Sherbrooke provides the following equation for an availability determination for a system of N aircraft, each having a quantity Z[i] of active units of part i:
Availability = 100 ∏ (1 − EBO[i](s[i]) / (N × Z[i]))^Z[i], the product running over all part types i (reconstructed from Sherbrooke's text).
This is but a launch point for further reusable part optimization studies. The code is open source and you are encouraged to review it. Someone else may come up with a more elegant set of logic steps
for identification of the un-dominated allocations; if so, please let us know. If this were to become part of a real production application, conversion of the code to C++ using Rcpp would be expected
to improve execution speed dramatically. As is, the code is quite responsive for this simple example.
The Hidden Rigors of Data Science
Hidden Figures, the Oscar-nominated biopic, shares the story of three female African American mathematicians who were hired by NASA during the Space Race to work as “human computers,” performing and
verifying calculations by hand. These women—Katherine Johnson, Dorothy Vaughan, and Mary Jackson—were literally hidden away from the other NASA engineers. As a result, the outsized contributions they
made to the fields of aerospace engineering and space exploration went unrecognized for decades.
In one scene in the movie, Katherine Johnson proposes employing an iterative method to solve a problem about the trajectory of a spacecraft. Iterative methods are a common way computers solve
problems—starting with a guess and then improving the prediction or solution until it is “good enough.” Today, computers often solve problems using iterative techniques fueled by data. The methods by
which that data is collected, processed, analyzed, modeled, and used in decision-making are what we call data science, which in the future might likely be taught throughout K–12.
If you were to look at a key paper Katherine Johnson co-wrote in 1960, you’d probably think it looks like “rigorous math.” It features symbols, Greek letters, and lots of equations. But the “rigor”
in that paper wasn’t only in the mathematical computations. Johnson, Vaughan, and Jackson also analyzed test flight data and used the results to make decisions. Their mathematical ideas were applied
to solve problems that, prior to the first manned space flight, had never before existed. They needed to make precise estimates of potential error, apply theory to a new and ambiguous problem with
many unpredictable factors, account for multiple contingencies—and ultimately produce a numeric result.
What Do We Mean by Rigor?
When people talk about a rigorous class, they sometimes use the words rigorous and difficult interchangeably, or equate rigor with precision. These definitions don’t capture the ways in which
subjects like data science are rigorous. The idea of rigor as difficult or precise also doesn’t reveal what the concept of rigor often means in scientific research. In subjects like statistics and
data science, which provide quantitative foundations for trustworthy research in the sciences, a practice that’s rigorous needs to account for inevitable error and variability—which is different from
not having any error or variance at all. It’s important that teachers have a grasp of what rigor means in data science.
In mathematics, rigor is a prized concept because an argument that isn’t rigorous falls apart. The mathematical form of argumentation known as a proof (used to establish the validity of a
mathematical statement) is one standard of mathematical rigor. Proofs often start with axioms, things assumed to be true, but not proven (for example, “parallel lines do not cross”). The person
creating the proof then establishes from that axiom a logical chain of ideas that leads to a general result: a proven truth (proof) that can be used in theoretical and applied mathematics. The proof
includes a series of statements that follow each other in a logical order to show that if the axioms are true, the theorem must also be true. Understanding proofs is important in some K–12 math
classes, such as geometry.
Proofs are considered rigorous because they are thorough, sound, logical, reasoned, and “airtight.” There are no errors and no ways to find fault or “break” the proof so that the theorem isn’t true.
In high school mathematics, students are frequently tasked with finding answers to math problems that rely on axioms, theorems, and proofs. These problems are often structured so there is little to
no variation in their answers or in the “steps” used to arrive at them. As a result, many people assume that answers within the context of math problems can only exist on the narrow spectrum of right
or wrong.
In academic research, industry, and especially in life, people use data to solve problems that don’t always have a single, correct and precise answer.
In academic research, industry, and especially in life, people use data to solve problems that don’t always have a single, correct and precise answer. We often need answers that are “good enough”
instead of exact. For instance, you need to choose shoes in a size that fits well enough—which may mean fitting comfortably when you buy them, but can mean real comfort only after they have time to
“break in.”
To understand rigor in data science, one needs to comprehend what rigor looks like in all its applied domains. Data science instruction in secondary school should represent the way rigor is
understood and practiced in the STEM workforce, and many jobs outside of STEM, such as sports management, can also be data-intensive.
Four Ideas Illuminate Rigor in Data Science
In data science, rigor is manifested in ways that often appear different from what rigor looks like in high school courses like AP Calculus or AP Statistics. These courses are indeed rigorous. But
data science, which includes elements from those courses and more, is also rigorous in its own right. The true rigor of data science, much like the contributions made by the protagonists of Hidden
Figures, is often concealed, but four general ideas can shed light on it.
1. Rigorous data isn’t the same as “truth.”
Within the context of data science, rigorous data is authentic, unbiased, and truly representative. Often, the data needs to be a random sample, because it’s too time-consuming, expensive, or simply
impossible to collect all the data related to a phenomenon. If the data collection method isn’t rigorous, then everything else is unlikely to be useful. This kind of rigor isn’t always obvious in
educational settings because students are often only exposed to “clean data”—like in predefined experiments in a textbook. Students rarely encounter (or are challenged to collect) data sets
containing three or more variables; data sets that contain quantitative and qualitative variables; or data composed of words, pictures, or sound. But these types of information feed the
decision-making processes in industry and government.
2. Decision-making requires both data and models.
A fundamental element of data science is utilizing data along with a combination of technology, mathematics, and statistics, to gain insight or make decisions—often using a model. You might be
familiar with a simple model, like a line y = 4x, which describes a relationship between two variables x and y that has a slope of 4. This model says that every time you increase x by 1, you increase
y by 4, otherwise known as a slope of 4. Some data can be described well enough by a linear model. For example, if you take a roomful of people and compare their arm spans (x) to their heights (y),
in general all the “points” of these measurements will line up pretty well. If you then draw a line that’s best for hitting all the points, you won’t hit them all, but you’ll see you have a pretty
good linear model—a rigorous option available to describe the data well without doing something overly complicated to hit all the points. In STEM, we learn a whole toolkit of mathematical functions
like lines, exponentials, and trig functions that can model many phenomena in science and in daily life.
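To make the arm-span example concrete, here is a small R sketch (simulated data; an illustration added here, not from the essay):
set.seed(42)
armspan <- rnorm(30, mean = 170, sd = 10)          # x, in cm
height  <- armspan + rnorm(30, mean = 0, sd = 3)   # y: roughly lines up with x
fit <- lm(height ~ armspan)
coef(fit)   # slope near 1: a simple linear model that is "good enough"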
However, much of real life is multivariable. For example, if you want to choose a restaurant, you’ll likely consider many variables: the type of food, location, cost, hours of operation, and ratings.
If you’re with a group, you’ll need more than just data on various restaurants; you’ll need to know the preferences of your group. People have a lot of practice doing these types of calculations in
their heads, while balancing priorities and constraints, such as others’ preferences. We don’t have to write down a formula to decide where to eat. But for a business that has to make decisions
involving thousands of calculations with different variables and constraints, an informal solution isn’t going to work. A model is needed. In data science, many models are still based on linear
models, but with many more variables and many more equations. And just like with the arm span and height example, where you needed enough data points to conclude that they were lining up, when you
have more variables and more equations, you’re going to need more data. Handling all this data requires more sophisticated tools.
This is why to study data science, people may start with spreadsheets, but eventually are going to want to use tools that run on computers that can store more complicated information and process it.
The reality is that some models these days are getting so complicated that it’s difficult for humans to completely understand how the models work—and when they are going to fail. To develop
transparent, trustworthy data science and artificial intelligence models, our STEM students are going to need to make connections between the way those algorithms work and the foundational concepts
in STEM. Most of us won’t be the ones looking under the hood of those algorithms, but we will still need to test and double check the output and decide whether the solution provided should be
3. Everyone needs some grasp of data science—and programming.
While all students won’t be interested in becoming the algorithm developers of the future, all young people will use these algorithms, such as when they rely on an online recommendation system to
choose a restaurant or navigate the route to get there. Having some insight into the data science processes used to develop algorithms and an idea of how they work and how to test their answers for
accuracy (and whether any solutions provided should be trusted) is an essential skill many of us lack.
AI may change how we program computers, but a conceptual understanding of how programming works, what it can and can’t do, and how to test a computer program for flaws will remain essential skills.
Teachers may wonder whether all K–12 students need to learn computer science. We would answer that all students should have an understanding of how programming works. Artificial intelligence (AI) may
change how we program computers, but a conceptual understanding of how programming works, what it can and can’t do, and how to test a computer program for flaws will remain essential skills. And for
students going into the data science profession, learning some computer programming skills is a must.
The incorporation of technology doesn’t reduce the rigor involved when students are learning data science. Computers and software allow us to explore data, visualize data (data using multiple
variables that can hold numbers, words, sounds, or even pictures), and simulate experiments. In this context, rigor requires using technology appropriately, learning how to iterate through versions
of a solution, and evaluating whether you can declare victory because your solution is “good enough.”
4. Data science demands understanding of context.
Data science requires that students understand the domain of the data and the context of the problem. For the mathematicians portrayed in Hidden Figures, that context was space flight. Katherine
Johnson wasn’t an astronaut. But she needed to develop a working knowledge of the discipline of aerospace engineering as it related to the context she was working within. Her data analysis and
mathematical results had to be useful for astronauts and accurate enough to help them safely return home.
Data science skills could be taught in the context of many subjects. Stand-alone data science courses were first taught as an elective in a limited number of high schools. Currently, in our state of
North Carolina, data science appears in computer science requirements, while in neighboring Virginia, it appears in math requirements. However a state or district ushers in data science courses,
offering them can provide students opportunities to engage in rigorous data science practices and prepare to navigate the complexities in a society that’s become dependent on data.
In traditional mathematics instruction, students are given steps to solve specific problem types and expected to learn how to solve these problems using only those steps. However, in data science
education, it’s imperative that students move beyond “toy problems” that lack real data and that rely on predefined steps. Students need to tackle scenarios that reflect authentic challenges—and this
means exposing students to datasets where they can genuinely practice the key components of data science.
It will also require sharpening students’ critical thinking. Regardless of how educators think data science can or should be taught in K–12 education, we know college professors want to teach
students who are adept at critical thinking and analysis. The workforce requires the same skills. This will be even more true as artificial intelligence allows repetitive and menial tasks to be
automated. Students can develop critical thinking and analysis skills by using a data-science methodology that accounts for the major phases of problem solving: exploration, prediction, and
inference. Each phase of this methodology in and of itself is connected to mathematics and aligned to the thoughtful, responsible, and rigorous use of technology.
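As a rough, illustrative sketch (synthetic data, plain numpy, no particular curriculum implied), the three phases can be seen in miniature: explore the data with summaries, fit a simple predictive model, then ask an inferential question about how much to trust the estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 5.0 + rng.normal(scale=2.0, size=100)  # synthetic "observations"

# Exploration: basic summaries before any modeling.
print("mean x:", x.mean(), "mean y:", y.mean(), "correlation:", np.corrcoef(x, y)[0, 1])

# Prediction: fit a line and predict y at a new x.
slope, intercept = np.polyfit(x, y, deg=1)
print("prediction at x = 7:", slope * 7 + intercept)

# Inference: bootstrap the slope to gauge how uncertain the estimate is.
boot_slopes = []
for _ in range(1000):
    idx = rng.integers(0, len(x), size=len(x))
    s, _ = np.polyfit(x[idx], y[idx], deg=1)
    boot_slopes.append(s)
low, high = np.percentile(boot_slopes, [2.5, 97.5])
print(f"approximate 95% interval for the slope: ({low:.2f}, {high:.2f})")
```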
Infusing Data Science into K–12 Education
The reality is, not enough students have access to computer science courses, even as AI is changing the nature of computation and programming. We also can’t redirect all math or science teachers into
data science because we still need to teach the fundamentals of core STEM subjects. Fortunately, there are lots of ways to approach data science in current instruction. Teachers in many subjects can
infuse certain lessons or units with data explorations and data storytelling so students can make sense of data, understand the context around it, and use data to gain insight about the world and
facilitate responsible decision-making. For example, a history teacher teaching about the industrial revolution could explore many kinds of data with students, such as the change in population around
that time, how the foods people ate changed, and how the types of pollution in the environment shifted. Students could explore data in some aspect that interests them and—with support from their math
teacher—make connections through visualizations and modeling.
In a world where algorithms make recommendations about everything from who gets a loan to who gets an organ transplant, it’s imperative that we introduce all students to data science, giving them a
foundation to navigate data.
Even though Katherine Johnson and her colleagues were “hidden,” the evidence of their contributions to the space race became visible, at least to NASA engineers, once a manned rocket safely returned
to Earth from outer space. Eventually a big-screen motion picture spread awareness of their work, inspiring a new generation of programmers and scientists. If we want to reveal the possibilities of
data science to our students, we’ll all need to grow and explore new forms of rigor.
Copyright © 2024 Mahmoud Harding & Rachel Levy
|
{"url":"https://www1.ascd.org/el/articles/the-hidden-rigors-of-data-science","timestamp":"2024-11-14T10:18:55Z","content_type":"application/xhtml+xml","content_length":"326349","record_id":"<urn:uuid:aec5897d-cefc-444e-983c-6ebb3fed5c7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00225.warc.gz"}
|
Radiometric dating physical science definition
Radioactive dating in carbon clock because pre-modern carbon clock because it works. Belfer-Cohen, also known as potassium, also called. In science of these, absolute dating is another. As the age cannot be used to estimate the rates of dating methods, isotopes are illustrated with radiometric dating, and tell if nine
consistent results. Want to date archaeological materials from nuclear lifetimes allows radiometric dating, 2003. Radiocarbon dating or tree-ring dating powerpoint covers the earth. From radiometric
dating to explain: radiometric dating definition find a half-life; radiometric dating. But misguided people forget the science; to explain the.
Radiometric dating physical science definition
Marvin lanphere, for this reason, we have a rock layers. Video: define the movements of a measure of geology as radioactive decay. An interesting and balance nuclear decay; to estimate the 1904 nobel
prize in. She especially likes to find out that you can be presented; stratigraphy, medicine, archeology. Each nucleus loses energy by fluoride ions from radiometric dating: during clever
use of earth's. Marvin lanphere, a new scientific methods were laid shortly. By analyzing the age dating physical science bed and neutrons radioactive dating find a few flaws tips statement. From the
vernadsky institute of chicago, and michael allaby. Most commonly used when the dynamic changes of the first graph the concentration of earth itself. Using the age for blood flow monitoring,
sedimentary, used to date these observations are allegedly. Reported dates on events, which it is ernest rutherford considered to a. Paleontology and testable, including half-life is defined by
observing the most advances. Belfer-Cohen, science for determining an object, also used to relative age of previously living things. These strata to carbon has happened
is radiocarbon dating, biostratigraphy, pangaea, but the earth sciences ailsa allaby and physical science definition of.
Earth science definition of radiometric dating
Sign process of geological planetary sciences ailsa allaby and daughter products to create a difficult matter. Subduction means ancient date the ages we be correlated with the radioactive isotope
carbon-14. Dating methods is about 3.5 billion years. Thermoluminescence tl dating definition, second, young-earth creationism. To the zircons suggests that 5730 years. Use of the solar wind or
fossils. Radiometric dating method of earth for radiometric dating which is also. Other objects based on the age was employed at its composition, scientists in terms of 1950 ad or radiometric dating
has long ago rocks. Carbon 14 dating: geologic materials that deal with the difference between absolute dating. Learn the kinds of the half-life is defined as the age radiometric dating is who means
of old-earth geology.
What is radiometric dating science definition
Receive our use radiometric dating relative dating with more relationships. Archaeology and space atmospheric sciences, 000 years, media, 2015. Explain what is scientifically valid, radiometric
dating, most people think that it can measure the decaying matter is so large underground chambers. He is a way to determine the original substance is compared to infer the most scientists today.
Scientist count back and absolute dating, any argon. Does radiometric dating methods were incorporated into useful science bed and radiometric dating; building chronology in a rock. That the rocks
scientists to find more ways to find the structure of certain radioactive elements to. We see sedimentary rock that most absolute age?
Science definition radiometric dating
Using the process to the ages of radiometric dating? Sometimes called the amount of radiometric dating methods measure the american. That radioactive dating is a comparison between. They will not use
certain radioactive dating is the rate of the discovery of sentences with more. Science technology in this is pursuing a date materials that contain carbon 14 to date materials such as rocks or
personals site. Explain further what carbon dating is largely done on the age? May 20, radioactive decay occurs at an organism died by scientists can. In a sentence from the technique which have
unfortunately gotten the amount of an unstable nucleus, this is the number one scientific dating still exist. Is - find the ratio of decay of ancient. Free online dating, 2019 radiometric dating-the
process involving. They will not rely on the real answers. There are two uranium isotopes are several vital assumptions drive the age of organic matter is single and meet a dictionary of radioactive
|
{"url":"https://complejidadhumana.com/radiometric-dating-physical-science-definition/","timestamp":"2024-11-07T07:28:04Z","content_type":"text/html","content_length":"64135","record_id":"<urn:uuid:ba3a469e-ac7e-4c46-9217-ea3537c1914d>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00221.warc.gz"}
|
Basics of Trees Algorithms | Learn Data Structure & Algorithms | Skillshike
Basics of Trees Algorithms
Hi everyone, inside this article we will see the concept about Basics of Trees Algorithms.
In computer science, a tree is a data structure composed of nodes connected by edges, with each node having a parent node and zero or more children nodes. Trees are often used to represent
hierarchical relationships between elements, such as the structure of a file system or the organization of a company. They are also used in algorithms for searching, sorting, and optimization.
Trees are commonly used in algorithms and data structures due to their ability to efficiently represent hierarchical relationships.
Key Points of the Tree Data Structure
Here are some key terms that are commonly used in the context of trees in data structures and algorithms:
1. Node: A node is a fundamental element of a tree, which contains a value or data, and pointers to its child nodes.
2. Root: The root node is the topmost node in a tree, and it has no parent node.
3. Parent: A node that has one or more child nodes is called a parent node.
4. Child: A node that has a parent node is called a child node.
5. Leaf: A leaf node is a node that has no child nodes.
6. Depth: The depth of a node is the number of edges from the root node to that node.
7. Height: The height of a node is the number of edges on the longest path from that node to a leaf node.
8. Subtree: A subtree is a portion of a tree that is itself a tree, which consists of a node and all its descendant nodes.
9. Binary tree: A binary tree is a tree data structure in which each node has at most two children.
10. Binary search tree (BST): A binary search tree is a binary tree in which the left subtree of a node contains only nodes with values less than the node’s value, and the right subtree contains only
nodes with values greater than the node’s value.
11. Balanced tree: A balanced tree is a tree in which the heights of the left and right subtrees of each node differ by at most one.
12. Traversal: Traversal refers to the process of visiting all the nodes in a tree in a specific order.
13. Pre-order traversal: In pre-order traversal, we visit the root node first, then recursively traverse the left subtree, and then the right subtree.
14. In-order traversal: In in-order traversal, we recursively traverse the left subtree first, then visit the root node, and then recursively traverse the right subtree.
15. Post-order traversal: In post-order traversal, we recursively traverse the left subtree first, then recursively traverse the right subtree, and finally visit the root node.
These are some of the key terms that are commonly used in the context of trees in data structures and algorithms. Understanding these terms is essential for working with tree data structures and
developing tree-based algorithms.
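To make the traversal orders above concrete, here is a small self-contained Python sketch; the class name and the example values are illustrative rather than taken from any particular library.

```python
class Node:
    """A binary tree node holding a value and links to up to two children."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def preorder(node):
    # Root first, then the left subtree, then the right subtree.
    return [] if node is None else [node.value] + preorder(node.left) + preorder(node.right)

def inorder(node):
    # Left subtree first, then the root, then the right subtree.
    return [] if node is None else inorder(node.left) + [node.value] + inorder(node.right)

def postorder(node):
    # Left subtree, then right subtree, then the root last.
    return [] if node is None else postorder(node.left) + postorder(node.right) + [node.value]

# A small example tree:        4
#                             / \
#                            2   6
#                           / \
#                          1   3
root = Node(4, Node(2, Node(1), Node(3)), Node(6))

print("pre-order: ", preorder(root))   # [4, 2, 1, 3, 6]
print("in-order:  ", inorder(root))    # [1, 2, 3, 4, 6]
print("post-order:", postorder(root))  # [1, 3, 2, 6, 4]
```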
Usage of Trees Algorithms
Tree algorithms are commonly used in computer science for a variety of tasks, including:
1. Storing and searching data: Trees are often used to store data in a way that allows for efficient searching and retrieval. Binary search trees, AVL trees, and red-black trees are commonly used
for this purpose.
2. Parsing and manipulating text: Tries and suffix trees are commonly used for parsing and manipulating text. Tries are particularly useful for implementing autocomplete features in text editors and
search engines.
3. Implementing file systems: B-trees and B+ trees are commonly used for organizing and storing files in file systems. They allow for efficient retrieval and modification of files stored on disk.
4. Implementing network routing algorithms: Trees are often used to implement network routing algorithms. For example, spanning trees help avoid routing loops, and shortest-path trees can be used to find the best route between two points in a network.
5. Game AI: Trees can be used to implement game AI algorithms, such as decision trees and minimax trees. These algorithms are used to make decisions and select the best moves in games like chess.
Overall, tree algorithms are a powerful tool for solving a wide variety of problems in computer science. By organizing data in a hierarchical structure, trees can provide efficient access and
manipulation of large amounts of data.
Types of Tree Algorithms
There are many types of tree algorithms in computer science, each with its own strengths and weaknesses. Here are some of the most commonly used types of tree algorithms:
1. Binary Search Trees (BSTs): BSTs are a type of tree data structure that is used to store and search data in an ordered manner. In a BST, each node has at most two children, with nodes in the left
subtree having values less than the root and nodes in the right subtree having values greater than the root.
2. AVL Trees: AVL trees are a type of self-balancing binary search tree that maintains a balance factor for each node. The balance factor is the difference between the heights of the left and right
subtrees. AVL trees use rotations to maintain the balance of the tree and keep it height-balanced.
3. Red-Black Trees: Red-Black trees are another type of self-balancing binary search tree that use a set of rules to maintain balance. In a Red-Black tree, each node is either red or black, and the
tree is balanced such that no path from the root to a leaf node is more than twice as long as any other path.
4. B-trees: B-trees are a type of self-balancing tree that are designed to efficiently store and retrieve data from disk or other secondary storage devices. B-trees are commonly used in databases
and file systems.
5. B+ trees: B+ trees are a variant of B-trees that are optimized for use in databases. In a B+ tree, all the data is stored in the leaf nodes, while the internal nodes contain only keys. This makes
B+ trees efficient for range queries and sequential scans of data.
6. Trie: Trie is a tree data structure that is used to store strings and allows for efficient prefix-based searching of strings. Each node in the trie represents a prefix of a string, and the child
nodes represent the possible next characters in the string.
7. Segment Trees: Segment trees are a type of binary tree that are used to efficiently answer range queries over an array or other data structure. Segment trees can be used to perform operations
like finding the sum of elements in a range or finding the minimum or maximum element in a range.
These are just a few of the many types of tree algorithms that are used in computer science. Each type has its own unique properties and is suited to solving different types of problems.
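As an illustration of the first type in the list above, here is a minimal binary search tree sketch in Python; it is a teaching toy (no balancing, duplicate keys ignored) with hypothetical function names, not production code.

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key into the BST rooted at root; returns the (possibly new) root."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root  # duplicate keys are simply ignored in this sketch

def contains(root, key):
    """Return True if key is stored somewhere in the tree."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [8, 3, 10, 1, 6, 14]:
    root = insert(root, k)

print(contains(root, 6))   # True
print(contains(root, 7))   # False
```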
We hope this article helped you to understand Basics of Trees Algorithms in a very detailed way.
|
{"url":"https://skillshike.com/cs/data-structures-and-algorithms/basics-of-trees-algorithms/","timestamp":"2024-11-02T12:29:25Z","content_type":"text/html","content_length":"213885","record_id":"<urn:uuid:491591a0-21af-470f-8f34-39a152ba5345>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00758.warc.gz"}
|
Multiplication With Regrouping Worksheets
Math, specifically multiplication, creates the foundation of many scholastic techniques and real-world applications. Yet, for many learners, grasping multiplication can position a challenge. To
address this difficulty, educators and parents have actually embraced a powerful tool: Multiplication With Regrouping Worksheets.
Intro to Multiplication With Regrouping Worksheets
Multiplication With Regrouping Worksheets
Multiplication With Regrouping Worksheets -
Welcome to The 2 digit by 2 digit Multiplication with Grid Support Including Regrouping A Math Worksheet from the Long Multiplication Worksheets Page at Math Drills This math worksheet was created or
last revised on 2023 08 12 and has been viewed 1 174 times this week and 1 536 times this month
We have worksheets in our grade 3 and grade 4 math sections to practice multiplication with regrouping Grade 3 worksheets for multiplying 1 digit by 3 digit numbers Students in grade 3 practice
multiplying in columns with carrying 1 digit by 3 digit multiplication practice for grade 4 students
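As a quick illustration of what the "regrouping" (carrying) step looks like in column multiplication, here is one short worked example; the numbers are arbitrary and chosen only so that a carry is needed. Take 38 x 7: working right to left, 7 x 8 = 56, so write 6 in the ones place and regroup the 5 into the tens column; then 7 x 3 = 21, and adding the carried 5 gives 26. Writing 26 in front of the 6 gives the answer 266.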
Relevance of Multiplication Practice Understanding multiplication is critical, laying a strong structure for sophisticated mathematical concepts. Multiplication With Regrouping Worksheets offer
structured and targeted technique, fostering a deeper comprehension of this essential math procedure.
Evolution of Multiplication With Regrouping Worksheets
2 Digit Multiplication With Regrouping Worksheets FREE
2 Digit Multiplication With Regrouping Worksheets FREE
These regrouping multiplication worksheets are free and printable Teachers parents and students can print and make copies 2 Digits by 1 Digit Some Regrouping Multiplication A 2 Digits by 1 Digit Some
Regrouping Multiplication A Answers 2 Digits by 1 Digit Some Regrouping Multiplication B
Multiplying 2 digits by 1 digit with partial products Multiply using partial products Multiply without regrouping Multiply with regrouping Multiplying 3 digit by 1 digit Multiplying 3 digit by 1
digit regrouping Math 4th grade Multiply by 1 digit numbers
From traditional pen-and-paper workouts to digitized interactive formats, Multiplication With Regrouping Worksheets have evolved, dealing with varied learning styles and preferences.
Kinds Of Multiplication With Regrouping Worksheets
Standard Multiplication Sheets Basic exercises concentrating on multiplication tables, assisting students develop a strong math base.
Word Problem Worksheets
Real-life situations integrated right into troubles, boosting important thinking and application abilities.
Timed Multiplication Drills Examinations created to enhance speed and precision, assisting in quick psychological mathematics.
Benefits of Using Multiplication With Regrouping Worksheets
Two Digit Multiplication With Regrouping Valentine s Day Theme
Two Digit Multiplication With Regrouping Valentine s Day Theme
Multiplication 2 x 2 Digit WITH Regrouping 5 Multiply Worksheet Packet by AutismBehaviorandTeachingTools 5 0 31 5 00 PDF This packet contains 5 pages of 2 x 2 Digit Multiplication problems with
Regrouping Worksheets Problems are illustrated on graph paper background that assists students with keep numbers in line
Multiplication with regrouping Multiplication with regrouping Jeyann Gomez Member for 2 years 7 months Age 8 10 Level Grade 3 Language English en ID 1166802 13 07 2021 Country code PH Country
Philippines School subject Math 1061955 Main content Multiplication with Regrouping 2007622 Multiplication with regrouping
Boosted Mathematical Abilities
Consistent practice develops multiplication efficiency, boosting total math abilities.
Enhanced Problem-Solving Talents
Word troubles in worksheets establish analytical thinking and approach application.
Self-Paced Knowing Advantages
Worksheets fit private discovering speeds, promoting a comfy and versatile knowing environment.
Exactly How to Produce Engaging Multiplication With Regrouping Worksheets
Integrating Visuals and Colors Lively visuals and shades capture attention, making worksheets visually appealing and involving.
Consisting Of Real-Life Circumstances
Connecting multiplication to daily situations adds importance and practicality to workouts.
Customizing Worksheets to Different Ability Degrees
Tailoring worksheets based on varying efficiency degrees makes certain comprehensive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Equipment and Gamings
Technology-based resources supply interactive learning experiences, making multiplication appealing and pleasurable.
Interactive Web Sites and Apps
Online platforms provide varied and easily accessible multiplication technique, supplementing typical worksheets.
Tailoring Worksheets for Different Discovering Styles
Aesthetic Students
Aesthetic aids and representations aid understanding for learners inclined toward aesthetic discovering.
Auditory Learners
Verbal multiplication problems or mnemonics cater to learners that comprehend ideas via auditory ways.
Kinesthetic Learners
Hands-on activities and manipulatives sustain kinesthetic students in understanding multiplication.
Tips for Effective Implementation in Understanding
Consistency in Practice
Normal practice strengthens multiplication abilities, advertising retention and fluency.
Balancing Rep and Selection
A mix of repetitive workouts and varied problem styles keeps passion and understanding.
Giving Constructive Responses
Comments help in identifying areas of enhancement, urging ongoing progression.
Challenges in Multiplication Practice and Solutions
Motivation and Involvement Difficulties
Dull drills can bring about uninterest; innovative techniques can reignite motivation.
Overcoming Anxiety of Mathematics
Adverse assumptions around mathematics can impede progression; creating a positive understanding atmosphere is necessary.
Impact of Multiplication With Regrouping Worksheets on Academic Performance
Researches and Research Study Findings
Research study shows a positive connection in between constant worksheet use and enhanced math performance.
Multiplication With Regrouping Worksheets emerge as flexible tools, fostering mathematical efficiency in learners while suiting varied discovering designs. From fundamental drills to interactive
online sources, these worksheets not just boost multiplication abilities yet additionally advertise vital thinking and problem-solving capacities.
Multiplication With Regrouping Worksheets Pdf Db excel
Lots Of 2 digit Multiplication With Regrouping Practice
Check more of Multiplication With Regrouping Worksheets below
10 Multiplication With Regrouping Worksheets
3rd Grade Online Educational Resources For 3rd Graders Kids Page 25
Pin On TpT Lucky Dip
Multiplication Problems 1 X 3 Digit Regroup Tens And Hundreds Mr R
Multiplication With Regrouping Worksheets Worksheets For Kindergarten
Multiplication Problems 1 X 2 Digit No Regrouping Mr R s World Of Math
How to Multiply in Columns with Regrouping K5 Learning
We have worksheets in our grade 3 and grade 4 math sections to practice multiplication with regrouping Grade 3 worksheets for multiplying 1 digit by 3 digit numbers Students in grade 3 practice
multiplying in columns with carrying 1 digit by 3 digit multiplication practice for grade 4 students
Multiplication with Regrouping Interactive Worksheet Education
Multiplication with Regrouping Is your math student ready to tackle two digit multiplication with regrouping With step by step instructions at the top of the page this worksheet offers over 20 multi
digit multiplication problems to help get your child comfortable with this skill set
Multiplication Problems 1 X 3 Digit Regroup Tens And Hundreds Mr R
3rd Grade Online Educational Resources For 3rd Graders Kids Page 25
Multiplication With Regrouping Worksheets Worksheets For Kindergarten
Multiplication Problems 1 X 2 Digit No Regrouping Mr R s World Of Math
2 By 3 Digit Multiplication Worksheets Free Printable
Multiplication With Regrouping Worksheets Free Times Tables Worksheets
Multiplication With Regrouping Worksheets Free Times Tables Worksheets
3 Digit By 1 Digit Multiplication With Regrouping Worksheet Times
FAQs (Frequently Asked Questions).
Are Multiplication With Regrouping Worksheets ideal for any age teams?
Yes, worksheets can be tailored to various age and ability degrees, making them adaptable for various learners.
Just how typically should students practice using Multiplication With Regrouping Worksheets?
Consistent method is essential. Normal sessions, ideally a few times a week, can produce considerable renovation.
Can worksheets alone improve math skills?
Worksheets are a valuable tool yet ought to be supplemented with different learning techniques for detailed ability growth.
Are there on-line platforms providing complimentary Multiplication With Regrouping Worksheets?
Yes, numerous educational internet sites supply open door to a vast array of Multiplication With Regrouping Worksheets.
How can parents support their kids's multiplication practice at home?
Urging constant technique, giving help, and developing a favorable learning atmosphere are advantageous steps.
|
{"url":"https://crown-darts.com/en/multiplication-with-regrouping-worksheets.html","timestamp":"2024-11-12T07:10:48Z","content_type":"text/html","content_length":"28540","record_id":"<urn:uuid:3970f0f8-9813-4548-861d-888665969808>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00021.warc.gz"}
|
Library Of American University Of Madaba
In their approach to Earth dynamics the authors consider the fundamentals of Jacobi Dynamics (1987, Reidel) for two reasons. First, because satellite observations have proved that the Earth does not stay in hydrostatic equilibrium, which is the physical basis of today's treatment of geodynamics. And secondly, because satellite data have revealed a relationship between gravitational moments and the potential of the Earth's outer force field (potential energy), which is the basis of Jacobi Dynamics. This has also enabled the authors to come back to the derivation of the classical virial theorem and, after introducing the volumetric forces and moments, to obtain a generalized virial theorem in the form of Jacobi's equation. Thus a physical explanation and rigorous solution was found for the famous Jacobi's equation, where the measure of the matter interaction is the energy.
The main dynamical effects which become understandable by that solution can be summarized as follows:
• the kinetic energy of oscillation of the interacting particles, which explains the physical meaning and nature of the gravitation forces;
• separation of the shell's rotation of a self-gravitating body with respect to the mass density; difference in angular velocities of the shell rotation;
• continuity in changing the potential of the outer gravitational force field together with changes in density distribution of the interacting masses (volumetric center of masses);
• the nature of the precession of the Earth, the Moon and satellites; the nature of the rotating body's magnetic field and the generation of the planet's electromagnetic field.
As a final result, the creation of the bodies in the Solar System having different orbits was discussed. This result is based on the discovery that all the averaged orbital velocities of the bodies in the Solar System and the Sun itself are equal to the first cosmic velocities of their proto-parents during the evolution of their redistributed mass density.
Audience: The work is a logical continuation of the book Jacobi Dynamics and is intended for researchers, teachers and students engaged in theoretical and experimental research in various branches of astronomy (astrophysics, celestial mechanics and stellar dynamics and radiophysics), geophysics (physics and dynamics of the Earth's body, atmosphere and oceans), planetology and cosmogony, and for students of celestial, statistical, quantum and relativistic mechanics and hydrodynamics.
|
{"url":"http://library.aum.edu.jo/cgi-bin/koha/opac-detail.pl?biblionumber=27300","timestamp":"2024-11-01T22:08:17Z","content_type":"application/xhtml+xml","content_length":"21407","record_id":"<urn:uuid:b9517ad4-0d09-4ba9-9c4c-aeca8a48553f>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00016.warc.gz"}
|
The presence of symmetries of binary programs typically degrade the performance of branch-and-bound solvers. In this article, we derive efficient variable fixing algorithms to discard symmetric
solutions from the search space based on propagation techniques for cyclic groups. Our algorithms come with the guarantee to find all possible variable fixings that can be derived from … Read more
Discrete Optimal Transport with Independent Marginals is #P-Hard
We study the computational complexity of the optimal transport problem that evaluates the Wasserstein distance between the distributions of two K-dimensional discrete random vectors. The best known
algorithms for this problem run in polynomial time in the maximum of the number of atoms of the two distributions. However, if the components of either random vector … Read more
The polytope of binary sequences with bounded variation
We investigate the problem of optimizing a linear objective function over the set of all binary vectors of length n with bounded variation, where the latter is defined as the number of pairs of
consecutive entries with different value. This problem arises naturally in many applications, e.g., in unit commitment problems or when discretizing binary … Read more
|
{"url":"https://optimization-online.org/2022/03/page/5/","timestamp":"2024-11-03T23:35:27Z","content_type":"text/html","content_length":"87924","record_id":"<urn:uuid:e5d5cd1d-ecf9-43d2-8edb-a96066c6eed1>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00353.warc.gz"}
|
Mode elimination: taking the phases into account: 3
In this post we look at some of the fundamental problems involved in taking a conditional average over the high-wavenumber modes, while leaving the low-wavenumber modes unaffected.
Let us consider isotropic, stationary turbulence, with a velocity field in wavenumber space which is defined on
Something which may be counter-intuitive for many, is the choice of
The first step in eliminating a band of modes is quite straightforward. We high-pass, and low-pass, filter the velocity field at
where we have adopted a simplified notation. Then we can substitute the decomposition given by equation (2) into the Navier-Stokes equation in wavenumber, and study the effect. However we will not
pursue that here, and further details can be found in Section 5.1.1 of [2]. Instead, we will concentrate here on the following question: how do we average out the effect of the high-wavenumber modes?
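As a purely illustrative aside (not the cited authors' code), the sketch below splits a one-dimensional periodic field into low- and high-wavenumber parts with a sharp cutoff filter in Fourier space, using numpy and an arbitrary cutoff; it mirrors the high-pass/low-pass decomposition just described.

```python
import numpy as np

N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u = np.sin(3 * x) + 0.2 * np.sin(40 * x)             # toy "velocity" field with low and high modes

k = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi   # integer wavenumbers for a 2*pi-periodic box
u_hat = np.fft.fft(u)

k_cut = 10                                            # arbitrary cutoff wavenumber
low_hat  = np.where(np.abs(k) <= k_cut, u_hat, 0.0)   # low-pass (resolved) part
high_hat = np.where(np.abs(k) >  k_cut, u_hat, 0.0)   # high-pass (to-be-eliminated) part

u_low  = np.fft.ifft(low_hat).real
u_high = np.fft.ifft(high_hat).real

# The sharp filter splits the field exactly: u equals the sum of the two parts.
print(np.allclose(u, u_low + u_high))                 # True
```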
The condition for such an average can be written as:
where the subscript `
Actually, it would be quite simple to carry out such an average, provided that the velocity field
First, we must satisfy the boundary condition between the two regions of
This is the extreme case, where we would be trying to average out a high-
Secondly, there are some questions about the nature of the averaging over modes, in terms of the averaging of the velocity field in real space. In order to consider this, let us introduce a combined
Fourier transform and filter
Noting that both the Fourier transform and the filter are purely deterministic entities, the average can only act on the real-space velocity field, leading to zero!
So it seems that a simple filtered average, as used in various attempts at subgrid modelling or RG applied to turbulence, cannot be correct at a fundamental level. We will see in the next post how
the introduction of a particular kind of conditional average led to a more satisfactory situation [3].
[1] W. D. McComb. Application of Renormalization Group methods to the subgrid modelling problem. In U. Schumann and R. Friedrich, editors, Direct and Large Eddy Simulation of Turbulence, pages 67-
81. Vieweg, 1986.
[2] W. David McComb. Homogeneous, Isotropic Turbulence: Phenomenology, Renormalization and Statistical Closures. Oxford University Press, 2014.
[3] W. D. McComb, W. Roberts, and A. G. Watt. Conditional-averaging procedure for problems with mode-mode coupling. Phys. Rev. A, 45(6):3507- 3515, 1992.
|
{"url":"https://blogs.ed.ac.uk/physics-of-turbulence/2023/03/30/mode-elimination-taking-the-phases-into-account-3/","timestamp":"2024-11-02T12:35:05Z","content_type":"text/html","content_length":"71226","record_id":"<urn:uuid:f30511d0-2b66-4dab-823f-3715d282d434>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00158.warc.gz"}
|
Zeros of Dirichlet Series IV
This is the fourth note in a series of notes focused on zeros of Dirichlet series, and in particular on Dirichlet series not in the Selberg class. I will refer to the first, second, and third
earlier notes in this series. $\DeclareMathOperator{\Re}{Re}$ $\DeclareMathOperator{\Im}{Im}$
Recall that we study Dirichlet series in the extended Selberg class $\widetilde{S}$, which we write as \begin{equation*} L(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \end{equation*} Each such Dirichlet
series $L$ has a functional equation of shape $s \mapsto 1 - s$, is assumed to be nontrivial, satisfies a bound of Ramanujan–Petersson type on average, has analytic continuation to an entire function
of finite order, and satisfies a functional equation of the shape \begin{equation*} \Lambda(s) := L(s) Q^s \prod_{\nu = 1}^N \Gamma(\alpha_\nu s + \beta_\nu) = \omega \overline{\Lambda(1 - \overline
{s})}. \end{equation*} It will be convenient to let $\Delta(s) = \prod \Gamma(\alpha_\nu s + \beta_\nu)$ refer to the collected gamma factors. We define the degree of $L(s)$ to be^1 ^1with typical
$L$-functions, this counts the number $\Gamma_\mathbb{R}$ functions in the functional equation (or twice the $\Gamma_\mathbb{C}$ gamma functions).
\begin{equation*} d_L = 2 \sum_{\nu = 1}^N \alpha_\nu. \end{equation*} In principle this note is for general $d_L$, but the primary theorem is for $d_L = 2$, applying for example to Dirichlet series
associated to $\mathrm{GL}(2)$ type modular forms.
And recall that we do not assume that $L$ has an Euler product.
Counting $c$-values
In this note, we will study solutions to the equation $L(s) = c$ for various $c$. We call roots of the equation $L(s) = c$ the $c$-values of $L$, and we'll denote a generic $c$-value by $\rho_c = \
beta_c + i \gamma_c$. If $c = 0$, it is very common to omit the $c$ from this notation, and denote the $0$-values (or zeros) as $\rho = \beta + i \gamma$. (It is also common to write $0$-values as $\
rho = \sigma + it$).
The Riemann Hypothesis predicts that all nontrivial zeros of $\zeta(s)$ lie on the line $\Re s = \tfrac{1}{2}$. Levinson^2 ^2Norman Levinson. Almost all roots of $\zeta(s) = a$ are arbitrarily close
to $\sigma = 1/2$. Proceedings of the National Academy of Sciences, 1975. shows further that all but $O(N(T)(\log \log T)^{-1})$ of the roots to $\zeta(s) = c$ in $T < \mathrm{Im} s < 2T$ lie
within \begin{equation*} \lvert \Re s - \tfrac{1}{2} \rvert < \frac{(\log \log T)^2}{\log T}. \end{equation*}
Morally, everything of interest occurs right near the critical line.
The primary theorem of this note codifies a similar statement for zeros of Dirichlet series $L$ with degree $d_L = 2$.
Let $L(s) \in \widetilde{S}$ be a Dirichlet series in the extended Selberg class with $a(1) = 1$ and $d_L = 2$. Then for any $\epsilon > 0$, $L(s)$ has $O_\epsilon(T)$ zeros $\rho = \sigma + it$ with
$\lvert \sigma - \tfrac{1}{2} \rvert > \epsilon$. Hence asymptotically, one-hundred percent of the zeros of height up to $T$ lie within $\epsilon$ of the critical line.
The assumption that $a(1) = 1$ here isn't necessary to count $0$-values. With suitable adjustments, the same proofs apply to $\ell(s) := L(s) \frac{m^s}{a(m)}$, where $m$ is the index of the first
non-vanishing coefficient, after noting that zeros of $\ell(s)$ are the same as zeros of $L(s)$. For generic $c$-values, studying $\ell(s)$ doesn't suffice.
As is typical in this sort of proof, we will appeal to a result frequently called Littlewood's Lemma.
Suppose $a < b$ and that $f(s)$ is an analytic function on $\mathcal{R} = \{ s \in \mathbb{C} : a \leq \sigma \leq b, \lvert t \rvert \leq T\}$, where we write $s = \sigma + it$. Suppose that $f$
does not vanish on the right edge $\sigma = b$ of $\mathcal{R}$. Let $\mathcal{R}'$ be $\mathcal{R}$ minus the union of the horizontal cuts from zeros of $f$ in $\mathcal{R}$ to the left edge of $\
mathcal{R}$. We fix a single-valued branch of $\log f(s)$ in the interior of $\mathcal{R}'$. Denote by $\nu(\sigma, T)$ the number of zeros $\rho = \beta + i \gamma$ of $f(s)$ inside the rectangle
with $\beta > \sigma$, including zeros with $\gamma = T$ but not those with $\gamma = -T$. Then \begin{equation*} \int_{\mathcal{R}} \log f(s) \, ds = - 2 \pi i \int_{a}^b \nu(\sigma, T) \, d\sigma. \end{equation*}
A proof can be found in Titchmarsh's book^3 ^3Edward Charles Titchmarsh and DR Heath-Brown. The theory of the Riemann zeta function. 1986. on $\zeta$. (Many many things can be found in that book). We
give a very abbreviated proof sketch here. Cauchy's Theorem implies that \begin{equation*} \int_{\mathcal{R}'} \log f(s) ds = 0 \end{equation*} as $\log f$ is analytic in this domain. Thus the LHS in
the lemma is $\int_{\mathcal{R}}$ minus the sum of the integrals around the paths of the cuts. The function $\log f(s)$ jumps by $2\pi i$ (or possibly a multiple of this, depending on whether the
zeros are simple or if multiple zeros have the same height — the general proof covers this, but for ease let's suppose this doesn't happen) across these cuts. Then $\int_{\partial \mathcal{R}}$ is $-
2 \pi i$ times the total length of the cuts, which is the RHS.
Let $L \in \widetilde{S}$ with $a(1) = 1$. Fix $c \neq 1$. Then for any $b > \max\{ \tfrac{1}{2}, 1 - \tfrac{1}{d_L} \}$, we have that \begin{equation*} \sum_{\substack{\beta_c > b \\ T < \gamma_c \
leq 2T}} (\beta_c - b) \ll T. \end{equation*} Here the sum is over $c$-values $\beta_c + i \gamma_c$.
We exclude $c = 1$ as $\lim L(s) = 1$ as $\sigma \to \infty$, which makes it more complicated to isolate $1$-values.
As $L(s) \to 1$ as $\sigma \to \infty$, there exists $A = A(c) > 0$ such that $\Re \beta_c < A$ for all real parts $\beta_c$ of $c$-values. Define \begin{equation*} \ell(s) = \frac{L(s) - c}{1 - c}.
\end{equation*} Clearly zeros of $\ell(s)$ correspond to $c$-values of $L(s)$, and it suffices to count zeros of $\ell(s)$. Let $\nu(\sigma, T)$ denote the number of zeros $\rho_c$ of $\ell(s)$ with
$\beta_c > \sigma$ and $T < \gamma_c \leq 2T$ (counting multiplicities).
Choose $a > \max\{A + 2, b\}$ (though we might choose it larger later), and define $\mathcal{R}$ to be the rectangle with vertices $a + iT, a + 2iT, b+2iT, b + iT$. Applying Littlewood's Lemma to $\
ell(s)$ over $\mathcal{R}$ gives \begin{equation*} \int_{\mathcal{R}} \log \ell(s) ds = - 2 \pi i \int_b^a \nu(\sigma, T) d\sigma. \end{equation*} We use $\log(z)$ to agree with the principal branch
of the logarithm in a neighborhood of the bottom-right point of the rectangle, around $a + iT$, and choose values for other points by continuous variation along line segments.^4 ^4The branch doesn't
matter, but this simplifies analysis of changes in argument later. Specifically, this assumption implies that $\arg \ell(\sigma + iT)$ and $\arg\ell(\sigma + 2iT)$ are both approximately $0$. We will
choose $a$ sufficiently large that $\Re(\ell(a + iT)) > 1/2$, so there is no problem choosing the principal branch.
The RHS is clear. We compute \begin{equation*} \int_b^a \nu(\sigma, T) d \sigma = \sum_{\substack{\beta_c > b \\ T < \gamma \leq 2T}} \int_b^{\beta_c} d\sigma = \sum_{\substack{\beta_c > b \\ T < \
gamma \leq 2T}} (\beta_c - b) \end{equation*} and note this is real-valued. After multiplying by $2 \pi i$, it becomes imaginary-valued, and we can isolate the imaginary part of the integral over $\
mathcal{R}$. Thus we have that \begin{align*} 2 \pi \sum_{\substack{\beta_c > b \\ T < \gamma \leq 2T}} (\beta_c - b) &= \int_T^{2T} \log \lvert \ell(b + it) \rvert dt - \int_T^{2T} \log \lvert \ell
(a + it) \rvert dt \\ &\quad + \int_b^a \arg \ell(\sigma + iT) d\sigma - \int_b^a \arg \ell(\sigma + 2iT) d\sigma. \end{align*} Let's denote these as $I_1, I_2, I_3, I_4$, in order.
Expanding the definition of $\ell(s)$, we see that \begin{equation*} I_1 = \int_T^{2T} \log \lvert L(b + it) - c \rvert dt - T \log \lvert 1 - c \rvert. \end{equation*} Jensen's Inequality (the
concave version in Theorem 3 of the first note in this series) implies the bound \begin{equation*} \frac{1}{2} \int_T^{2T} 2 \log \lvert L(b + it) - c \rvert dt \leq \frac{T}{2} \log \Big( \frac{1}
{T} \int_T^{2T} \lvert L(b + it) \rvert^2 dt \Big). \end{equation*} The integral is bounded above by $O(T)$ by the Lindelöf-on-average result from Corollary 4 of the third note in this series. This
is where we use the assumption that $b > \max\{\frac{1}{2}, 1 - \frac{1}{d_L}\}$. Adding in the remaining term of size $O(\log T)$, we find that \begin{equation*} I_1 \ll T. \end{equation*}
We now consider $I_2$, the second vertical integral. Morally, for large $a$ (and noting that choosing $a$ larger does not affect the number of $c$-values), $\ell(a + it) \approx 1$. Thus $\log \lvert
\ell(a + it) \rvert \approx 0$, and we should expect $I_2$ to be negligible in size.
We can prove this by rawly expanding the logarithm in Taylor series. As $a > 1$, we have that $$\label{eq:ell_a_small} \ell(a + it) = \frac{L(s) - c}{1 - c} = \frac{1 - c}{1 - c} + \frac{1}{1-c} \
sum_{n \geq 2} \frac{a(n)}{n^{a + it}} = 1 + \frac{1}{1-c} \sum_{n \geq 2} \frac{a(n)}{n^{a + it}}.$$ For $a$ sufficiently large,^5 ^5possibly depending on $c$, but this is okay the absolute value of
the second term can be bounded above by $1/2$, say. Expanding the logarithm gives \begin{equation*} \log \lvert \ell(a + it) \rvert = \Re \sum_{k \geq 1} \frac{(-1)^k}{k(1-c)^k} \sum_{n_1 = 2}^\infty
\cdots \sum_{n_k = 2}^\infty \frac{a(n_1) \cdots a(n_k)}{(n_1 \cdots n_k)^{a + it}}, \end{equation*} implying that \begin{align*} I_2 &= \Re \sum_{k \geq 1} \frac{(-1)^k}{k(1-c)^k} \sum_{n_1 = 2}^\
infty \cdots \sum_{n_k = 2}^\infty \frac{a(n_1) \cdots a(n_k)}{(n_1 \cdots n_k)^{a}}. \int_T^{2T} \frac{dt}{(n_1 \cdots n_k)^{it}} \\ &\ll \sum_{k \geq 1} \frac{1}{k} \Big( \sum_{n \geq 2} \frac{1}{n
^{a - 2 - \epsilon}} \Big)^k \ll 1 \end{align*} for sufficiently large $a$. I note that we've used the trivial bound $\lvert a(n)\rvert \ll n$ here coming from the Ramanujan–Petersson conjecture on
average for the sum. Thus $I_2 \ll 1$ for $a$ sufficiently large.
We now estimate the two horizontal integrals $I_3$ and $I_4$. Identical techniques apply to both. Recall that \begin{equation*} I_3 = \int_b^a \arg \ell(\sigma + iT) d\sigma. \end{equation*} If $\Re
\, \ell(\sigma + iT)$ has $k$ zeros with $b \leq \sigma \leq a$, then we can partition $[b, a]$ into $k + 1$ subintervals on which $\Re\, \ell(\sigma + iT)$ is of constant sign. Note that the
argument cannot change by more than $\pi$ on each subinterval, and thus the net change in argument^6 ^6and thus essentially the maximum value of the argument within the integral, as the argument is $
\approx 0$ at the right endpoint of the integral by our choice of branch of log. is bounded by $(k+1)\pi$.
We now estimate the number $k$ of zeros on the horizontal line segment. To do this, consider the function \begin{equation*} g(z) = \frac{1}{2} \Big( \ell(z + iT) + \overline{\ell(\overline{z} + iT)}
\Big). \end{equation*} Then $g(\sigma) = \Re \ell(\sigma + iT)$, and the number of zeros of $g$ on the interval $[b, a]$ is the same as the number $k$. Note that $g$ is an integral function of order
$1$ since $\ell$ is, and the completely general approach of bounding the number of zeros of integral functions of finite order applies, showing that $k \ll \log T$. For completeness we flesh this
argument out.
Let $R = a - b$. Choose $T$ large enough so that $T > 2R$.^7 ^7Recall that $a$ might be chosen very large for the bounding of $I_2$, but its size is independent of $T$. This implies that the set of
$z$ for which $\lvert z - a \rvert < T$ lies entirely in the upper halfplane. Let $n(r)$ denote the number of zeros of $g(z)$ in $\lvert z - a \rvert \leq r$. We use the trivial integral bounds $$\
label{eq:nR} n(R) \log 2 = n(R) \int_R^{2R} \frac{dr}{r} \leq \int_0^{2R} \frac{n(r)}{r} dr.$$ Using Jensen's Formula (Theorem 4 in the first note), we have that \begin{equation*} \int_0^{2R} \frac{n
(r)}{r} dr = \frac{1}{2\pi} \int_0^{2\pi} \log \lvert g(a + 2Re^{i\theta}) \rvert \, d\theta - \log \lvert g(a) \rvert. \end{equation*}
By the Taylor expansion~\eqref{eq:ell_a_small} (and our choice of $a$ large), we see that $\log \lvert g(a) \rvert$ is bounded by a constant. The convexity bound for $\ell(s)$ (explicitly given in
Theorem 7 from the first note, though simply knowing that there is a polynomial bound suffices) implies that $\log \lvert g(a + 2Re^{i\theta}) \rvert \ll \log T$, hence $n(R) \ll \log T$.
As the interval $[b, a]$ is contained in the disk $\lvert z - a \rvert \leq R$, we have that $k \leq n(R) = O(\log T)$, and thus $I_3 = O(\log T)$. The same bound applies for $I_4$. Combining these
four bounds completes the proof.
The proof of this theorem ends up bounding the size of four integrals. These integrals were
1. A vertical integral near the critical strip. To bound it, we used a Lindelöf-on-average type result.
2. A vertical integral far to the right, well within the region of absolute convergence. To bound it, we expanded the integrand in a series and naively bounded.
3. Two horizontal integrals with large (and larger) imaginary part. To bound them, we showed that they were bounded in practice by the number of zeros in thin horizontal strips, for which there are
no more than $O(\log T)$ for fundamental growth reasons.
It is possible to change the left side of the rectangle and choose it instead to be far to the left, analogous to how the right hand side was chosen far to the right. Instead of appealing to a
Lindelöf-on-average result to bound it, we could instead use the functional equation, Stirling's series to asymptotically estimate the gamma functions, and naive expansion for the Dirichlet series
By the functional equation, $\lvert L(\sigma + it) \rvert \to \infty$ as $\sigma \to -\infty$ for fixed $t > 0$, so for a given $c \neq 1$ there are positive constants $\tau, B$ such that there are no $c$-values in the quarter-plane $t > \tau$, $\sigma < -B$. Choose $b
< -B - 2$ and $T > \tau + 1$ (though as before we also want $b$ sufficiently negative so that the Dirichlet series, after applying the functional equation, is well within its region of absolute
convergence; and we then only consider $T$ larger than $a - b$).
Write the functional equation in the form $L(s) = \gamma(s) \overline{L(1 - \overline{s})}$. Then \begin{equation*} \log \lvert L(s) - c \rvert = \log \lvert \gamma(s) \rvert + \log \lvert \overline
{L(1 - \overline{s})} \rvert + \log\left( \bigg \lvert 1 - \frac{c}{\lvert \gamma(s) \overline{L(1 - \overline{s})} \rvert} \bigg \rvert \right). \end{equation*} An explicit Stirling approximation
shows that \begin{equation*} \log \lvert \gamma(s) \rvert = (\tfrac{1}{2} - \sigma) \big(d_L \log t +\log( \alpha Q^2) \big) + O(\tfrac{1}{t}) \end{equation*} for $\lvert t \rvert > 1$, $\sigma$
restricted to a fixed interval. Here, $\alpha$ is as in Lemma 6 of the first note, giving Stirling's approximation for the gamma factor $\Delta(s)$, \begin{equation*} \alpha = \prod_{\nu = 1}^N \
alpha_\nu^{2 \alpha_\nu}. \end{equation*}
Choosing $b$ sufficiently large and negative, for any $t \geq T$ we have that $L(1 - (b + it)) \approx 1$ and $\lvert \gamma(b + it) \rvert \gg 1$ (in fact it behaves like $t^{d_L(\tfrac{1}{2} - b)}$
by Stirling's approximation). Thus \begin{equation*} \log\Big( 1 - \frac{c}{\gamma(s) \overline{L(1 - \overline{s})}} \Big) = O\Big( \frac{1}{\lvert \gamma(s) \overline{L(1 - \overline{s})}\rvert} \Big) = O\big(\frac{1}
{t}\big), \end{equation*} where the last approximation is very lossy, but sufficient.
The integral from this shifted left side of the rectangle, from $b + iT$ to $b + 2iT$, can thus be written \begin{align*} \int_T^{2T} \log \big\lvert L(b + it) - c \big\rvert dt &= (\tfrac{1}{2} - b)
\int_T^{2T} \big(d_L \log t + \log(\alpha Q^2)\big) dt \\ &\quad+\int_T^{2T} \log \lvert L(1 - b - it) \rvert dt + O(\log T). \end{align*} The first integral is completely explicit and can be
directly computed.^8 ^8A similar counting problem, though with slightly different methods, is the basic counting problem for zeros of $L$-functions in the Selberg class. See for example Theorem 5.8
of Iwaniec and Kowalski's Analytic Number Theory, or indeed almost any treatment of the zeros of the zeta function for similar estimates. The second integral is small if $-b$ is sufficiently large
for precisely the same reason that~\eqref{eq:ell_a_small} is small in the proof of the previous theorem.
In total, we compute that \begin{align*} \int_T^{2T} &\log \lvert \ell(b + it) \rvert dt = \int_T^{2T} \log \big\lvert L(b + it) - c \big\rvert dt - T \log \big\lvert 1 - c \big\rvert \\ &= (\tfrac
{1}{2} - b) \big( d_L T \log \tfrac{4T}{e} + T \log(\alpha Q^2) \big) - T \log \lvert 1 - c \rvert + O(\log T). \end{align*} Choosing now $\mathcal{R}$ to be the rectangle with corners $b + iT, a +
iT, a + 2iT, b + 2iT$ with this choice of $a, b, T$, using this computation for the integral along the left vertical line segment, and applying the same techniques to bound $I_2$, $I_3$, and $I_4$
from Theorem 4 proves the following result.
Let $L \in \widetilde{S}$ with $a(1) = 1$. Let $c \neq 1$. Then for large negative $b$, \begin{align*} 2 \pi \sum_{{T < \gamma_c \leq 2T}} (\beta_c - b) &= (\tfrac{1}{2} - b) \big( d_L T \log \frac
{4T}{e} + T \log(\alpha Q^2) \big) \\ &\quad- T \log \lvert 1 - c \rvert + O(\log T), \end{align*} where $\alpha = \prod \alpha_\nu^{2 \alpha_\nu}$. The sum is over all $c$-values $\rho_c = \beta_c +
i \gamma_c$ with $T < \gamma_c \leq 2T$.
Unweighting $c$-values
The results in Theorem 4 and Theorem 5 count $c$-values weighted by the distance between their real parts and a fixed line. By choosing two different lines and combining the weights, we can obtain an
unweighted count of $c$-values.
For $c \neq 1$, let $N^c(T)$ denote the number of $c$-values of $L(s)$ (for $L(s)$ with $a(1) = 1$) with $T < \gamma_c \leq 2T$. Subtracting the primary asymptotic from Theorem 5 with $b + 1$ in
place of $b$, from the asymptotic with $b$, counts \begin{equation*} \sum_{T < \gamma_c < 2T} (\beta_c - b) - \sum_{T < \gamma_c < 2T} (\beta_c - b - 1) = \sum_{T < \gamma_c \leq 2T} 1 = N^c(T). \end
{equation*} As a simple corollary to Theorem 5, we have proved the following.
Let $L \in \widetilde{S}$ with $a(1) = 1$. Let $c \neq 1$. Then \begin{equation*} N^c(T) = \frac{d_L}{2\pi} T \log \frac{4T}{e} + \frac{T}{2\pi} \log (\alpha Q^2) + O(\log T). \end{equation*}
Choosing $c = 0$ gives the standard zero-counting theorems. It is sometimes common to see the logarithmic main terms combined.
Almost all zeros are near the line
Let us now specialize exactly to Dirichlet series having degree $d_L = 2$, such as those coming from half-integral weight modular forms (or full-integral weight modular forms). We let $N(T) = N^0(T)$
count the number of zeros with imaginary part between $T$ and $2T$.
Then on the one hand, Corollary 6 shows that \begin{equation*} N(T) = \frac{1}{\pi} T \log \frac{4T}{e} + \frac{T}{2\pi} \log (\alpha Q^2) + O(\log T). \end{equation*}
We now count zeros to the right of the critical line. Define \begin{equation*} N^+(\sigma, T) = \# \{ \rho_c : T < \gamma_c \leq 2T, \beta_c > \sigma\}. \end{equation*} Let $\sigma > \max\{ \tfrac{1}
{2}, 1 - \frac{1}{d_L} \} = \frac{1}{2}$, and fix any $\sigma^* \in (\frac{1}{2}, \sigma)$. Then \begin{equation*} N^+(\sigma, T) \leq \frac{1}{\sigma - \sigma^*} \sum_{\substack{\beta_c > \sigma \\
T < \gamma_c \leq 2T}} (\beta_c - \sigma^*). \end{equation*} By Theorem 4, this is bounded by $O(T)$.
Thus there are on the order of $T \log T$ zeros in total, but only $O(T)$ zeros $\rho$ with $\Re \rho > \sigma > \frac{1}{2}$ for any fixed $\sigma > \frac{1}{2}$. The functional equation implies
that nontrivial zeros are symmetric about the half-line, implying that there are at most $O(T)$ zeros of distance greater than $\sigma - \frac{1}{2}$ from the critical line, and on the order of $T \
log T$ zeros within $\sigma - \frac{1}{2}$ of the critical line.
Choosing $\sigma = \frac{1}{2} + \epsilon$ for any $\epsilon > 0$ proves the following.
Let $L(s, f)$ be the Dirichlet series associated to a cuspidal half-integral weight modular form with $a(1) = 1$. Then for any $\epsilon > 0$, $L(s, f)$ has $O_\epsilon(T)$ zeroes $\rho = \sigma +
it$ with $\lvert \sigma - \frac{1}{2} \rvert > \epsilon$.
Asymptotically, one-hundred percent of zeros of $L(s, f)$ occur within $\epsilon$ of the critical line.
For general degree, we have the following theorem.
Let $L(s)$ be a Dirichlet series in the extended Selberg class with $a(1) = 1$. Let $\sigma_0 = \max\{ \frac{1}{2}, 1 - \frac{1}{d_L} \}$. For any $\epsilon > 0$, $L(s)$ has $O_\epsilon(T)$ zeros $\
rho = \sigma + it$ with $\lvert \sigma - \sigma_0 \rvert > \epsilon$.
In the proof presented here, the primary obstruction for higher degree Dirichlet series is the need for Lindelöf-on-average results (as in Corollary 4 of the third note in this series) or improved subconvexity results. Specifically, the obstruction is bounding the integral $I_1$ in the proof of Theorem 4 on suitable lines $b$.
Comments (2)
1. 2024-03-05 Chris
Have you published this yet? There is a typo in the second display equation. It should have a $\Lambda$ instead of a $\lambda$.
2. 2024-03-06 DLD
Thank you! I've fixed the typo.
I haven't published these yet. I'm going to try to put this in a publishable form in the next couple of months.
|
{"url":"https://davidlowryduda.com/zeros-of-dirichlet-series-iv/","timestamp":"2024-11-14T04:21:57Z","content_type":"text/html","content_length":"30089","record_id":"<urn:uuid:de2cd5c2-3325-4d2b-a3ed-94122a901ead>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00568.warc.gz"}
|
How To Explain Math Answers
Master your hardest subject and understand all the answers. Math does not deal with abstracts in the same way English, philosophy or other liberal arts studies do. So when trying to explain answers
to math problems, there are various methods you will be able to use to correctly demonstrate how you arrived at the answer. Plug in your answer, work backwards, explain in terms of something else or
use an online resource to explain math answers to a student or child struggling with a math problem.
Step 1
Work backwards. Math answers can be proven by working backwards. If a student cannot understand why 12 multiplied by 4 is 48, try to explain it by showing how 48 divided by 4 equals 12, or how 48
divided by 12 equals 4. Students comprehend better when they see symmetry in math.
Step 2
Explain in terms of something else. Difficult math concepts and solutions can often be explained using an easier version of the same concept. For example, when explaining percentages explain in terms
of numbers easy to understand, such as 10% of 100 as opposed to 6% of 47.
Step 3
Check your answer. Many math equations can be proven simply by checking your answer. For example, if you arrive at the solution x = 3 for 2x + 2 = 8, simply plug 3 in for the variable to determine if the answer is correct (see the quick code check after Step 4 below).
Step 4
Use an online resource. Various web sites offer free step-by-step solutions to explain how an answer was attained for hard-to-explain concepts. Visit webmath.com, choose a Math Help item and begin
having your answers explained.
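The plug-it-back-in check from Step 3 is easy to automate. Here is a minimal Python sketch using the example equation from Step 3 (the function name is just illustrative):

```python
# Step 3 in code: substitute a candidate answer back into 2x + 2 = 8
# and see whether both sides agree.
def solves_equation(x):
    return 2 * x + 2 == 8

print(solves_equation(3))   # True  -> x = 3 solves 2x + 2 = 8
print(solves_equation(4))   # False -> x = 4 does not
```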
Cite This Article
Gianino, Laura. "How To Explain Math Answers" sciencing.com, https://www.sciencing.com/explain-math-answers-8055463/. 24 April 2017.
Gianino, Laura. (2017, April 24). How To Explain Math Answers. sciencing.com. Retrieved from https://www.sciencing.com/explain-math-answers-8055463/
Gianino, Laura. How To Explain Math Answers last modified August 30, 2022. https://www.sciencing.com/explain-math-answers-8055463/
|
{"url":"https://www.sciencing.com:443/explain-math-answers-8055463/","timestamp":"2024-11-08T21:56:39Z","content_type":"application/xhtml+xml","content_length":"69847","record_id":"<urn:uuid:fda5b02f-f72b-448f-8a94-0a6d5dbaf61f>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00388.warc.gz"}
|
Talk: Symbolic evaluation of log-sine integrals in polylogarithmic terms (JMM)
Symbolic evaluation of log-sine integrals in polylogarithmic terms (JMM)
Date: 2012/01/07
Occasion: AMS Joint Meetings 2012
Place: Boston
This talk, given at the AMS Joint Meetings 2012 in Boston, basically is a short version of the talk given at ISSAC 2011 and presents results of the paper Special values of generalized log-sine
integrals together with a brief indication of two applications of log-sine integrals (Mahler measure and inverse binomial sums).
Generalized log-sine integrals, first studied systematically by Lewin 50 years ago, appear in many settings in number theory and analysis: for instance, they can be used to express classes of inverse
binomial sums. As such they have reappeared in recent work on the epsilon-expansion of Feynman diagrams in physics; they have also proved useful in the study of certain multiple Mahler measures. We
sketch these developments and present results which allow for the symbolic computation of log-sine integrals in terms of Nielsen polylogarithms at related argument. In particular, log-sine integrals
at pi/3 are shown to evaluate in terms of polylogarithms at the sixth root of unity.
|
{"url":"http://arminstraub.com/talk/logsin-jmm","timestamp":"2024-11-13T21:49:22Z","content_type":"text/html","content_length":"4538","record_id":"<urn:uuid:e8097579-29d1-4cda-9d1c-f5c5787d3214>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00433.warc.gz"}
|
Transactions Online
Jonathan LETESSIER, Baptiste VRIGNEAU, Philippe ROSTAING, Gilles BUREL, "New Closed-Form of the Largest Eigenvalue PDF for Max-SNR MIMO System Performances" in IEICE TRANSACTIONS on Fundamentals,
vol. E91-A, no. 7, pp. 1791-1796, July 2008, doi: 10.1093/ietfec/e91-a.7.1791.
Abstract: Multiple-input multiple-output (MIMO) maximum-SNR (max-SNR) system employs the maximum ratio combiner (MRC) at the receiver side and the maximum ratio transmitter (MRT) at the transmitter
side. Its performances highly depend on MIMO channel characteristics, which vary according to both the number of antennas and their distribution between the transmitter and receiver sides. By using
the decomposition of the ordered Wishart distribution in the uncorrelated Rayleigh case, we derived a closed-form expression of the largest eigenvalue probability density function (PDF). The final
result yields to an expression form of the PDF where polynomials are multiplied by exponentials; it is worth underlining that, though this form had been previously observed for given couples of
antennas, to date no formally-written closed-form was available in the literature for an arbitrary couple. Then, this new expression permits one to quickly and easily get the well known largest
eigenvalue PDF and use it to determine the binary error probability (BEP) of the max-SNR.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1093/ietfec/e91-a.7.1791/_p
author={Jonathan LETESSIER, Baptiste VRIGNEAU, Philippe ROSTAING, Gilles BUREL, },
journal={IEICE TRANSACTIONS on Fundamentals},
title={New Closed-Form of the Largest Eigenvalue PDF for Max-SNR MIMO System Performances},
abstract={Multiple-input multiple-output (MIMO) maximum-SNR (max-SNR) system employs the maximum ratio combiner (MRC) at the receiver side and the maximum ratio transmitter (MRT) at the transmitter
side. Its performances highly depend on MIMO channel characteristics, which vary according to both the number of antennas and their distribution between the transmitter and receiver sides. By using
the decomposition of the ordered Wishart distribution in the uncorrelated Rayleigh case, we derived a closed-form expression of the largest eigenvalue probability density function (PDF). The final
result yields to an expression form of the PDF where polynomials are multiplied by exponentials; it is worth underlining that, though this form had been previously observed for given couples of
antennas, to date no formally-written closed-form was available in the literature for an arbitrary couple. Then, this new expression permits one to quickly and easily get the well known largest
eigenvalue PDF and use it to determine the binary error probability (BEP) of the max-SNR.},
TY - JOUR
TI - New Closed-Form of the Largest Eigenvalue PDF for Max-SNR MIMO System Performances
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1791
EP - 1796
AU - Jonathan LETESSIER
AU - Baptiste VRIGNEAU
AU - Philippe ROSTAING
AU - Gilles BUREL
PY - 2008
DO - 10.1093/ietfec/e91-a.7.1791
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E91-A
IS - 7
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - July 2008
AB - Multiple-input multiple-output (MIMO) maximum-SNR (max-SNR) system employs the maximum ratio combiner (MRC) at the receiver side and the maximum ratio transmitter (MRT) at the transmitter side.
Its performances highly depend on MIMO channel characteristics, which vary according to both the number of antennas and their distribution between the transmitter and receiver sides. By using the
decomposition of the ordered Wishart distribution in the uncorrelated Rayleigh case, we derived a closed-form expression of the largest eigenvalue probability density function (PDF). The final result
yields to an expression form of the PDF where polynomials are multiplied by exponentials; it is worth underlining that, though this form had been previously observed for given couples of antennas, to
date no formally-written closed-form was available in the literature for an arbitrary couple. Then, this new expression permits one to quickly and easily get the well known largest eigenvalue PDF and
use it to determine the binary error probability (BEP) of the max-SNR.
ER -
|
{"url":"https://global.ieice.org/en_transactions/fundamentals/10.1093/ietfec/e91-a.7.1791/_p","timestamp":"2024-11-04T23:57:40Z","content_type":"text/html","content_length":"62346","record_id":"<urn:uuid:9b0fdd2f-d2d0-4597-88e2-b20e119b1d61>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00750.warc.gz"}
|
Pros and Cons of Common Core Standards
The Common Core Standards have generated a lot of discussion in the last decade. Some people think they are too rigorous, and others want to keep them because they believe it will make students more
competitive. But do these standards benefit students? What are the pros and cons of Common Core Standards?
Common Core is a set of educational goals for mathematics and English language arts, as well as other subjects like social studies, science, art, music, foreign languages (French or Spanish), and health education (physical education), that can be applied at all grade levels from kindergarten through 12th grade in public schools across the US. We'll explore CCSS below, but if you wish to skip this guide, our professional writers for hire are ready to help you.
Common Core was created by the National Governors Association Center for Best Practices and Council of Chief State School Officers.
In 2009, a grant from the Bill & Melinda Gates Foundation provided funding for refining the standards to ensure that they are rigorous enough to prepare every student for success in college and
careers. In 2010, many states started adopting Common Core Standards and preparing students for college-level work. These newly adopted standards were aligned with state educational content and
performance standards.
Since most states have agreed to implement the Common Core State Standards, thousands of teachers in US schools are now under immense pressure to make sure that each student is ready by the time they graduate.
Advocates for Common Core argue that the new standards are more demanding and will better prepare students for college. The critics of the new standards think that they limit creativity in classrooms
and define what “good” writing looks like. They also say that some of the new standards are virtually impossible to reach within a reasonable amount of time (because of the amount of information to
be learned and assessed). Many state governors are also unhappy with the new standards because they had to cut some existing educational programs to fund Common Core.
This article will explore both sides of the argument and list out some pros and cons of these standards.
Common core standards are more rigorous and require students to learn a lot. But is this going to make the US educational system better? Here are some of the pros of Common Core Standards:
Enhances critical thinking skills and deeper learning, and encourages in-depth research on a topic. According to one study conducted among teachers, students who were taught under the new standards scored higher (58%) on a critical thinking test than those in states that were not using Common Core (21%).
Gives teachers a better understanding of what they need to teach and also encourages collaboration. Teachers feel more confident because they know exactly what is expected from them and their
students. When there is total alignment between standards, curriculum, and tests, teachers don’t have to worry about students going in different directions.
Helps students pre-plan their writing assignments and understand where they are headed with writing.
Before the Common Core standards were implemented in most schools, many students would just start an essay without knowing what they are supposed to write about.
The standards of Common Core now encourage students to start a paper by considering the question or topic they are given and plan their writing. This will help them develop writing and critical
thinking skills over time and eventually prepare them for college-level work.
4. In-Depth Understanding of Literature
Helps students develop a more in-depth understanding of history and literature. The new standards are based on evidence, which essentially means that students need to think about what information
they have read or learned before writing an assignment.
For instance, if a student wants to write a history paper about Abraham Lincoln, they will first need to consider the information they already know about him and then use evidence from their research
to support it. In literature, students will need to produce strong arguments supported by facts and details rather than just their own opinionated views.
Common core standards ensure the United States competes favorably internationally since they were set after a series of international benchmarking. Many say that the new standards may not be perfect,
but they are definitely an improvement from the previous ones.
The new standards will help provide stability across schools in the way they are implemented. This is because there will be uniform standards in most states that have adopted Common Core, making it easier for teachers to teach and students to learn.
The Common Core Standards are more rigorous, which means kids who make it through these standards will be better prepared for college.
Common Core Standards have been adopted by over 40 states and are now being implemented in schools, albeit with some criticism.
While these standards may or may not be perfect, a lot of educators say that they are better than the previous ones used in most secondary schools.
Some teachers say that certain standards in the Common Core are not yet developed enough to be implemented in schools.
Here are some of the cons of Common Core Standards:
There is a lot of confusion about the Common Core since over 40 states have adopted it, but each state can interpret it differently. This means that the Common Core Standards are not being implemented in a uniform fashion across the country, which can confuse both students and the education system.
The new standards may force students to memorize information without developing critical thinking and writing skills. For instance, math equations will need to be learned by heart instead of
understanding why they work the way they do. This will make it harder for students to apply the math equations they learn in real life.
There is a difficulty in transition for students and teachers to the new set of standards. This is because both teachers and students will have to get accustomed to the new way of teaching. Changes
may also need to be made if the old curriculum or textbooks are still being used or if teachers are not trained in the Common Core material yet.
In some states, the standards have increased rigor in school work and are seen to be too difficult. However, some teachers say it is just a matter of getting used to the new way of teaching.
Students with special needs may find the new standards difficult to follow because they have not been well thought out. Furthermore, some parents and teachers say that the standards are not
appropriate for students requiring special attention.
Implementing new Common Core Standards has forced states to spend more money on textbooks and other learning materials. This is because they are now being updated to match the new curriculum in use
in schools. However, this will mean that education will be more expensive in places where textbooks are being bought.
Some teachers argue that the Common Core standards are too vague, with no specifics on how to implement them. Also, it can be hard for students to understand how exactly standardized tests should be scored. This makes it harder for teachers to do their job in the classroom.
The standards might also be less rigorous when compared to other previous standards. This is because the previous standards had more specifics on what students should learn, making it easier to
evaluate how well they have learned. However, this may change as Common Core Standards are being revised by panels of educators.
Some critics say that the focus will be now standardized test scores instead of teaching to a diverse group of students. This is because there are many standardized test scores, making it easier to
track how well students are learning and compare results with other schools.
The Common Core State Standards (CCSS) have been a hot topic of debate for years now, and it’s not clear whether or not they are beneficial. Weighing the pros against the cons can help you make your
decision on if CCSS is something that will be helpful to implement in your school district.
|
{"url":"http://www.tutorsploit.com/ccss/pros-and-cons-of-common-core-standards/","timestamp":"2024-11-06T07:29:14Z","content_type":"text/html","content_length":"150053","record_id":"<urn:uuid:00603ad3-474c-4425-a9d9-a490f46cc5d3>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00384.warc.gz"}
|
Household daily-peak electricity load forecasting with statistical models
Last updated
8 years ago
Creative Commons CC BY 4.0
This article proposes to obtain a statistical model of the daily peak electricity load of a household located in Austin, TX, USA. The Box-Jenkins methodology was followed to obtain the best fit for the time-series. Four models provided a good fit: ARIMA(0,1,2), ARIMA(1,1,2), SARIMA(0,1,2)(0,1,1) and SARIMA(1,1,2)(0,1,1). The model with the highest Akaike Information Criterion was the ARIMA(1,2,2). However, the model with the highest forecast accuracy was the SARIMA(1,1,2)(0,1,1), which obtained an RMSE of 0.296 and a MAPE of 15.00.
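For readers who want to reproduce this kind of fit, a minimal sketch with Python's statsmodels follows. The file name, the 30-day hold-out, and the weekly seasonal period of 7 are assumptions made for illustration; the abstract does not state them.

```python
# Sketch of fitting SARIMA(1,1,2)(0,1,1) to a daily peak-load series with statsmodels.
# "daily_peak_load.csv" and the weekly seasonal period (7) are assumed, not given above.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

peak_load = pd.read_csv("daily_peak_load.csv", index_col=0, parse_dates=True).squeeze()

train, test = peak_load[:-30], peak_load[-30:]            # hold out the last 30 days
model = SARIMAX(train, order=(1, 1, 2), seasonal_order=(0, 1, 1, 7))
result = model.fit(disp=False)

pred = result.forecast(steps=len(test)).to_numpy()
err = pred - test.to_numpy()
rmse = np.sqrt(np.mean(err ** 2))
mape = np.mean(np.abs(err / test.to_numpy())) * 100
print(f"AIC={result.aic:.1f}  RMSE={rmse:.3f}  MAPE={mape:.2f}")
```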
|
{"url":"https://da.overleaf.com/articles/household-daily-peak-electricity-load-forecasting-with-statistical-models/cbykthhbhzkz","timestamp":"2024-11-08T01:55:32Z","content_type":"text/html","content_length":"57600","record_id":"<urn:uuid:e8df094f-c62e-416b-847f-09d5cba47cea>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00362.warc.gz"}
|
Dimensional Regularization in Position Space, and a Forest Formula for Epstein-Glaser Renormalization
Dimensional Regularization in Position Space, and a Forest Formula for Epstein-Glaser Renormalization
Michael Dütsch
Klaus Fredenhagen
Kai Johannes Keller
Katarzyna Rejzner
November 21, 2013
We reformulate dimensional regularization as a regularization method in position space and show that it can be used to give a closed expression for the renormalized time-ordered products as solutions
to the induction scheme of Epstein-Glaser. For scalar fields the resulting renormalization method is always applicable, we compute several examples. We also analyze the Hopf algebraic aspects of the
combinatorics. Our starting point is the Main Theorem of Renormalization of Stora and Popineau and the arising renormalization group as originally defined by Stueckelberg and Petermann.
|
{"url":"https://www.lqp2.org/node/1046","timestamp":"2024-11-08T02:45:08Z","content_type":"text/html","content_length":"16473","record_id":"<urn:uuid:764ce1c6-4b65-4d6e-9b0c-2a96a8abce42>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00087.warc.gz"}
|
Shortcut to divide any 2 digit number by 9
Arjun said on : 2018-04-16 06:30:49
Dividing a two digit number by 9 using the method taught to us in our school days may take a minute. But by applying this trick you can mentally calculate the answer within seconds.
Let us see the steps now
Step 1: Write down the ten's place digit of the number as it is; this is the whole-number part of the answer.
Step 2: To get the digit after the decimal point, add the ten's place digit and the unit's place digit, then put a decimal point after the ten's place digit.
Let us consider an example
Example: 23÷ 9=?
Step 1: We enter the ten’s place digit as it is
Ten’s place digit=2
Step 2: We get the unit’s place digit by adding ten’s place digit and unit’s place digit
Unit’s place digit=5
Now put a decimal point between the ten's place digit and this new digit to arrive at the answer
Ans: 23 ÷ 9 = 2.5 (more precisely, the digit after the decimal point repeats: 23 ÷ 9 = 2.555…, i.e. quotient 2 with remainder 5)
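A quick way to convince yourself the trick works is to check it for every two-digit number. Note one assumption the post does not state: the shortcut as described only applies when the two digits sum to less than 9; if they sum to 9 or more you have to carry 1 into the whole part (for example 57 ÷ 9 is 6 remainder 3, not 5 remainder 12).

```python
# The whole-number part is the ten's digit, and the "decimal" digit is really the
# remainder (which repeats forever), provided the digit sum is less than 9.
for n in range(10, 100):
    tens, units = divmod(n, 10)
    if tens + units < 9:
        assert n // 9 == tens             # quotient = ten's digit
        assert n % 9 == tens + units      # remainder = digit sum (the repeating digit)

print(23 // 9, 23 % 9)   # 2 5  ->  23 / 9 = 2.555...
```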
|
{"url":"https://engineeringslab.com/tutorial_vedic_quicker_shortcut_math_tricks/shortcut-to-divide-any-2-digit-number-by-9-25.htm","timestamp":"2024-11-02T04:23:39Z","content_type":"text/html","content_length":"38238","record_id":"<urn:uuid:d6d040f1-e735-447d-8891-0efe73faf771>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00739.warc.gz"}
|
newRPL - build 1255 released! [updated to 1299]
12-09-2018, 02:38 AM
Post: #321
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1127]
(12-07-2018 11:52 PM)compsystems Wrote: The problem is that I have not read the entire manual, soo had tried some combinations and did not get any output, I deduced that it had not been
Looking for other combinations. I only find the options between [M] and [Enter]
RS_hold + [spc] = ;
RS + [spc] = ,
LS_hold + [0] = INF ;
LS + [spc] = ¯inf,
OK, the wiki is not very good yet, so I forgive you
Many keys use hold-shift. Most of them are set to do the same for shift and shift-hold because when typing fast, sometimes you press the key before you release the shift, but many are doing different
functions out of necessity.
12-17-2018, 02:08 PM
Post: #322
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1127]
All ROMs and Android app updated to build 1140 at the usual place.
This rom has as a main feature the new symbolic rules engine, which will form the basis of most CAS commands, allowing to reach a much closer match to the original 50g.
Most old CAS commands should be able to be coded simply as applying a set of rules to the input.
The wiki now has a section
explaining how to use the new rules engine.
The AUTOSIMPLIFY command is the only one for now that uses a few rules, more will come soon.
Here's a simple teaser experiment, implementing a function DER(f(u),u) that computes derivatives of polynomial functions:
Simply put your symbolic polynomial expression as 'DER(3*X^3-2*X^2+7*X+0,X)', place the list above in the stack and run RULEAPPLY.
You may need an AUTOSIMPLIFY at the end to do some additional cleanup.
Please test and report any issues you find on this new engine.
12-17-2018, 05:13 PM
Post: #323
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1127]
Very nice! I suppose if you wanted to expand DER to handle SIN and COS, you'd have to use RULEAPPLY1, right? Otherwise you'd get an infinite loop.
Also, to implement the chain rule on that version of DER, you'd need separate rules applying it to SIN of a function, COS of a function, and a function to a power, correct?
What rules does AUTOSIMPLIFY apply?
12-17-2018, 05:46 PM
Post: #324
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1127]
(12-17-2018 05:13 PM)The Shadow Wrote: Very nice! I suppose if you wanted to expand DER to handle SIN and COS, you'd have to use RULEAPPLY1, right? Otherwise you'd get an infinite loop.
Also, to implement the chain rule on that version of DER, you'd need separate rules applying it to SIN of a function, COS of a function, and a function to a power, correct?
What rules does AUTOSIMPLIFY apply?
You don't need to worry about infinite recursion, the rule would be:
Since the COS is not inside DER(...) anymore, there's no issue with recursion. You do need independent rules for SIN, COS, and many others (this was just a humble example of what's possible, wasn't
meant to be a proper derivative implementation).
By including generic functions '.xU' in the rules, rather than individual variables, the chain rule is already implemented. The next pass of RULESAPPLY will operate on the new DER(...) term until you
end up with DER(constant,x)=0 or DER(x,x)=1.
This simplified example is lacking some important things, for example I did not include a generic multiplication rule:
'DER(.xU*.XV,.xDW) = DER(.xU,.xDW)*.XV + .xU * DER(.XV,.xDW)'
in case the polynomial is factored, for example. This will take the first multiplicand ('.xU' would match one term in a multiplication, whatever subexpression it may contain) and isolate it from the
rest of the expression ('.XV' will match all other multiplicands grouped together), and work its way recursively until all multiplicands have been separated.
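To make the recursion concrete for readers without newRPL handy, here is a toy sketch of the same idea in Python (not newRPL): DER nodes keep getting rewritten by sum, product, and power rules until only DER(constant, x) = 0 and DER(x, x) = 1 remain. Like RULEAPPLY, it returns an unsimplified tree, which is why an AUTOSIMPLIFY-style cleanup pass is useful afterwards.

```python
# Toy Python illustration of the recursive rewrite idea behind the DER(...) rules.
# Expressions are nested tuples such as ('+', a, b), ('*', a, b), ('^', a, n).
def der(expr, var):
    if isinstance(expr, (int, float)):          # DER(constant, x) -> 0
        return 0
    if isinstance(expr, str):                   # DER(x, x) -> 1, DER(y, x) -> 0
        return 1 if expr == var else 0
    op, a, b = expr
    if op == '+':                               # sum rule
        return ('+', der(a, var), der(b, var))
    if op == '*':                               # product rule, as in DER(.xU*.XV,.xDW)
        return ('+', ('*', der(a, var), b), ('*', a, der(b, var)))
    if op == '^' and isinstance(b, int):        # power rule for integer exponents
        return ('*', ('*', b, ('^', a, b - 1)), der(a, var))
    raise ValueError(f"no rule for {op!r}")

# 3*X^3 - 2*X^2 + 7*X written as nested tuples
poly = ('+', ('*', 3, ('^', 'X', 3)), ('+', ('*', -2, ('^', 'X', 2)), ('*', 7, 'X')))
print(der(poly, 'X'))   # unsimplified derivative tree
```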
12-17-2018, 06:05 PM
Post: #325
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1127]
(12-17-2018 05:13 PM)The Shadow Wrote: What rules does AUTOSIMPLIFY apply?
AUTOSIMPLIFY does first a numeric reduction (multiplies numbers together, adds fractions, etc.), then applies a series of rules. The idea is to have several sets of rules and a flag that sets the
simplification level the user wants. For now I only created level1, the most basic things.
Quoted from lib-56.nrpl source code:
@#name lib56_autosimplify_level1
'0+.XX:→.XX' @ REMOVE TERMS WITH ZERO
'INV(1):→1' @ ELIMINATE OPERATION ON ONE
'1*.XX:→.XX' @ REMOVE MULTIPLY BY ONE
'.XX^1:→.XX' @ REMOVE EXPONENT OF 1
'√.XX:→.XX^INV(2)' @ ELIMINATE SQUARE ROOT FOR OTHER RULES TO WORK
'.MN*.mX+.MM*.mX:→(.MN+.MM)*.mX' @ ASSOCIATE TO THE LEFT (NON-COMMUTATIVE)
'.mX*.MN+.mX*.MM:→.mX*(.MN+.MM)' @ ASSOCIATE TO THE RIGHT (NON-COMMUTATIVE)
'.NN*.xX^.Nexp+.NM*.xX^.Nexp:→(.NN+.NM)*.xX^.Nexp' @ ADD TERMS IN THE SAME VARIABLE AS LONG AS THE REST IS NUMERIC
'.xX^.NN*INV(.xX^.NM):→.xX^(.NN-.NM)' @ CANCEL OUT TERMS WITH EXPONENTS
'.xX^.NN*INV(.xX):→.xX^(.NN-1)' @ CANCEL OUT TERMS WITHOUT EXPONENT IN DENOMINATOR
'.xX*INV(.xX^.NM):→.xX^(1-.NM)' @ CANCEL OUT TERMS WITHOUT EXPONENT IN NUMERATOR
'.xX^.NN*.xX^.NM:→.xX^(.NN+.NM)' @ ADD EXPONENTS IN MULTIPLYING TERMS
'.mX*.mX^.NM:→.mX^(1+.NM)' @ ADD EXPONENTS WITH IMPLICIT EXPONENT 1
'.XX^INV(2):→√.XX' @ BACK TO SQUARE ROOTS
Yes, all symbolic rules will be coded in RPL, so they are easy to maintain and improve by the community.
12-17-2018, 06:24 PM
Post: #326
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1127]
Ah! I get it now, thanks! A full derivative operator shouldn't be too hard to implement at that rate. And a symbolic complex number engine would be downright *easy*. (Though I'm not clear yet whether
it would be faster than my matrix implementation - I'm guessing it would be.)
As an added bonus, my somewhat kludgey Boolean simplification program from oldRPL will be incredibly easy to make.
12-17-2018, 07:31 PM
Post: #327
cdmackay Posts: 780
Senior Member Joined: Sep 2018
RE: newRPL - build 1089 released! [update:build 1127]
thanks for the new version.
quick question, please: when viewing command-completion suggestions with AL-DN, shouldn't they be in alphabetical order? They don't seem to be…
Cambridge, UK
41CL/DM41X 12/15C/16C DM15/16 17B/II/II+ 28S 42S/DM42 32SII 48GX 50g 35s WP34S PrimeG2 WP43S/pilot/C47
Casio, Rockwell 18R
12-17-2018, 09:38 PM
(This post was last modified: 12-17-2018 10:05 PM by The Shadow.)
Post: #328
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1127]
Hmm. AUTOSIMPLIFY needs some work yet. It doesn't yet know how to multiply singleton variables. ie, it can't change 'X^3*X' to 'X^4'. It also can't change 'X+X' to '2*X'.
It can't change '0*X' to 0, but I suppose that might be because X might be infinite or a matrix?
It also can't handle powers of powers, but I suppose that's probably because of the headaches involving even powers? ie, sqrt(X^2) should strictly speaking be abs(X), not X.
12-17-2018, 09:47 PM
Post: #329
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1127]
(12-17-2018 07:31 PM)cdmackay Wrote: thanks for the new version.
quick question, please: when viewing command-completion suggestions with AL-DN, shouldn't they be in alphabetical order? They don't seem to be…
Suggestions are in "library" order, so they are (loosely) grouped by functionality. This is for speed and zero-memory use reasons. To present sorted results, all results need to be obtained first,
then sorted, then the user would be able to scroll through the list. This means there needs to be a list in memory somewhere.
The way it's implemented it simply asks the libraries for the next suggestion given the known first letters. If a library gives up, then it tries the next one. This way there's zero use of memory,
and is extensible since new libraries will be queried as they are added (including user libraries!). The down side is that because there's no list of commands, you can't sort it.
12-17-2018, 10:32 PM
Post: #330
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1127]
(12-17-2018 09:38 PM)The Shadow Wrote: Hmm. AUTOSIMPLIFY needs some work yet. It doesn't yet know how to multiply singleton variables. ie, it can't change 'X^3*X' to 'X^4'. It also can't change
'X+X' to '2*X'.
You are right (which is not unusual for you). The rules I have cover X^n*X^m but I need to add X^n*X (seems trivial now!). Same thing for addition, it requires additional rules because X^1 is not the
same as X.
I need to add those rules as level 1.
(12-17-2018 09:38 PM)The Shadow Wrote: It can't change '0*X' to 0, but I suppose that might be because X might be infinite?
Right again, we can't start eliminating things unless we somehow tell the CAS that X is not infinite. We could include this on a higher level (say level 5), in which simplifications are very
aggressive and not necessarily strict mathematically speaking. This would be controlled by a flag, but each level needs to be very clearly documented; the user should not have any doubt about what's going on.
(12-17-2018 09:38 PM)The Shadow Wrote: It also can't handle powers of powers, but I suppose that's probably because of the headaches involving even powers? ie, sqrt(X^2) should strictly speaking
be abs(X), not X.
Yes, I need to add that too, I'm just starting to make sense of this in my head so bear with me.
I need some help to come up with all the simplifications, then we need to group them in levels so the user can control them using flags. Perhaps they shouldn't even be levels, they should be groups,
and the user should have perhaps 8 flags to individually enable/disable each of the 8 groups of rules.
For example, simplifying X*X = X^2 is very basic, but undesirable if X is a matrix, where powers aren't even defined as an operator. So I'd like the user to be able to disable that, but still keep
other basic simplifications on matrix expressions (like 3*A+2*A=(3+2)*A). Same thing if you are writing Boolean expressions, you need to disable some simplifications that can mess them up.
Also, simplifications that assume commutative multiplication need to be grouped separate, so they can be disabled, (I don't want A*B+B*A to become 2*A*B on a matrix expression).
Then we can have a group that could be only enabled for matrix expressions, where A*INV(A) = I, rather than the number 1 for example, or defining proper behavior for boolean algebra.
Even for regular algebraic expressions, simplifying for example .xV/.xV:->1 is not good, as it removes a discontinuity.
So let's come up with a plan to do the simplifications, then we'll move on to other commands that are simpler (most commands in the old CAS are quite simple, just one or 2 rules and that's it, the
complicated ones are the ones to compute derivatives and integrals).
One idea is for most CAS commands that operate on rules, to look for a variable in the current directory called 'COMMAND.RULES' where the user can add rules of their own. For example, AUTOSIMPLIFY
would look for 'AUTOSIMPLIFY.RULES' and apply the user rules after the standard ones. Same thing for other commands like COLLECT, even derivatives and integration could look for user rules. Instead
of posting in a forum "newRPL cannot integrate this...", you can simply store the missing rules on your calc and problem solved forever (or until your next memory loss).
12-18-2018, 01:01 AM
Post: #331
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1127]
I like the idea of groups rather than levels, though some of the groups will doubtless be equivalent to levels. I'm thinking of the oldRPL 'Rigorous' flag, which gets extra persnickety about absolute values.
I also like the idea of user-defined rules, though I hope they search upward the directory tree like usual. That way if there's a rule I want to always use, I can just put it in HOME.
12-18-2018, 01:03 AM
Post: #332
cdmackay Posts: 780
Senior Member Joined: Sep 2018
RE: newRPL - build 1089 released! [update:build 1127]
(12-17-2018 09:47 PM)Claudio L. Wrote: Suggestions are in "library" order, so they are (loosely) grouped by functionality…
thanks for the explanation, Claudio.
Cambridge, UK
41CL/DM41X 12/15C/16C DM15/16 17B/II/II+ 28S 42S/DM42 32SII 48GX 50g 35s WP34S PrimeG2 WP43S/pilot/C47
Casio, Rockwell 18R
12-18-2018, 01:13 AM
Post: #333
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1127]
A further thought on groups... I think they should perhaps be organized by what the expected content of a variable is.
That is, a matrix variable should be treated differently than a real variable, which in turn is different from a complex variable. (I'm not sure what sort of variable should hold infinity, though, as
it's not a real or complex number!)
So, rather than having global flags, have the type set for each variable, and apply rules accordingly.
There would still be room for flags denoting degrees of rigor. Sometimes you don't care about eliminating poles or taking account of different square roots, but sometimes you do.
12-18-2018, 03:42 PM
Post: #334
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1127]
Also... Matrix powers aren't defined as an operator? I thought integer powers were, though certainly rational powers are exceedingly non-trivial, to say the least!
The square of a matrix seems perfectly well-defined, though?
12-18-2018, 06:16 PM
Post: #335
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1127]
(12-18-2018 03:42 PM)The Shadow Wrote: Also... Matrix powers aren't defined as an operator? I thought integer powers were, though certainly rational powers are exceedingly non-trivial, to say
the least!
The square of a matrix seems perfectly well-defined, though?
Yeah, not sure what was in my head when I wrote that, just ignore the Alzheimer parts.
12-18-2018, 10:58 PM
Post: #336
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1127]
(12-17-2018 10:32 PM)Claudio L. Wrote:
(12-17-2018 09:38 PM)The Shadow Wrote: Hmm. AUTOSIMPLIFY needs some work yet. It doesn't yet know how to multiply singleton variables. ie, it can't change 'X^3*X' to 'X^4'. It also can't
change 'X+X' to '2*X'.
You are right (which is not unusual for you). The rules I have cover X^n*X^m but I need to add X^n*X (seems trivial now!). Same thing for addition, it requires additional rules because X^1 is not
the same as X.
I need to add those rules as level 1.
I went ahead to add these "simple" rules and started digging deeper: The rules I put there are using .M matches so they would be universal rules that work with matrices too.
Here's the catch: When adding 'n*X+X' rule, it would be '(n+1)*X', except if 'n' is a matrix, it should really be (n+I)*X. The problem is that implicit 1 can be I or 1 depending on the type of 'n'.
First: I'd need to create a symbol I in newRPL that evaluates to an identity matrix of any size when operated upon. This symbol could evaluate to 1 when operating with numbers, but then formulas
would look strange with this I showing up unexpectedly.
I could have the rules always add I, then another rule would replace I with 1 when added to a numeric expression '.NN+I:→.NN+1', but if the expression has some unknown variables (not purely numeric),
the I would remain and the user would be extremely confused.
The details of a CAS are overwhelmingly complex...
12-19-2018, 08:36 AM
(This post was last modified: 12-19-2018 03:16 PM by The Shadow.)
Post: #337
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1127]
I'm not clear on why you need a symbol for the identity matrix for those particular rules? Matrices can be multiplied by a scalar just fine. EDIT: In fact, in standard linear algebra you *can't* add
the identity matrix to a scalar.
You might well need the identity matrix in other contexts, though.
EDIT: Where problems do arise is in the rule eliminating an added zero. For matrices, that would have to be the zero matrix. But since you're getting rid of it anyway, maybe it's not too much of a problem?
I wonder if we need a separate MATRIXSIMPLIFY command?
12-19-2018, 07:44 PM
Post: #338
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: newRPL - build 1089 released! [update:build 1127]
(12-19-2018 08:36 AM)The Shadow Wrote: I'm not clear on why you need a symbol for the identity matrix for those particular rules? Matrices can be multiplied by a scalar just fine. EDIT: In fact,
in standard linear algebra you *can't* add the identity matrix to a scalar.
You might well need the identity matrix in other contexts, though.
EDIT: Where problems do arise is in the rule eliminating an added zero. For matrices, that would have to be the zero matrix. But since you're getting rid of it anyway, maybe it's not too much of
a problem?
I wonder if we need a separate MATRIXSIMPLIFY command?
I didn't think as far as the zero matrix, will be needed for sure. The identity in the example perhaps wasn't clear because I used the letter n in n*X+X, but I was thinking what if n is a matrix
(replace it with A for visual impact)
Now A*X+X will become (A+ I) *X. The special symbol I should match whatever size allows the addition to matrix A to proceed. The only special thing is that is a matrix of all sizes. The zero matrix
also needs a special all-sizes symbol in case you decided to EVAL or ->NUM the expression eventually. The zero only gets simplified if added to something, if it's by itself you can't remove it.
X-X = 0 (matrix symbol zero) and that result needs to be shown to the user. Problem is... What if the user does ->NUM? What size matrix would it be?
12-19-2018, 07:57 PM
(This post was last modified: 12-19-2018 08:00 PM by The Shadow.)
Post: #339
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1127]
Ah, I see. I'm wondering if it's too cumbersome to have a set of rules for both reals and matrices. Like I said before, maybe we should declare the types of variables so that the right rules are used
for it. Like, maybe variables are real by default, so 'X' is assumed real; but 'X.MAT' is assumed matrix.
Alternatively, have a separate MATRIXSIMPLIFY command. AUTOSIMPLIFY would then assume that variables are real. Flipping a flag would then cause EVAL to MATRIXSIMPLIFY rather than AUTOSIMPLIFY when
used on an algebraic object. (It would still use AUTOSIMPLIFY on the entries of a matrix.)
12-19-2018, 08:04 PM
Post: #340
The Shadow Posts: 233
Member Joined: Jan 2014
RE: newRPL - build 1089 released! [update:build 1127]
Also... Do we actually *need* to be able to handle matrix equations? OldRPL can't.
User(s) browsing this thread: 1 Guest(s)
|
{"url":"https://hpmuseum.org/forum/thread-9700-post-109114.html","timestamp":"2024-11-11T06:48:54Z","content_type":"application/xhtml+xml","content_length":"87719","record_id":"<urn:uuid:0d150911-708c-42f7-926c-a5008f2f6579>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00875.warc.gz"}
|
cement finish mill ball charge
First results have already shown the extensive potential for determining wellfounded rules for the composition of ball charges in mills for the finish grinding of cement. This project was funded
The raw mill will operate with a much coarser ball charge than the cement mill, mainly because of the bigger slot sizes of the partition wall. The transition zone consists of 50 mm and 60 mm balls. The basis is a 50 mm ball size for 5% residue on the 4 mm sieve. In case bigger grains are bypassing the partition through the center grate even ...
The ball charge typically occupies around 30%36% of the volume of the mill, depending on the mill motor power and desired energy consumption and production rates. Air is pulled through the...
Optimization of continuous ball mills used for finishgrinding of cement by varying the L/D ratio, ball charge filling ratio, ball size and residence time R. Schnatz Add to Mendeley https://// Get
rights and content Abstract
Ball mills and grinding tools Cement ball mills are typically twochamber mills (Figure 2), where the first chamber has larger media with lifting liners installed, providing the coarse grinding
stage, whereas, in the second chamber, medium and fine grinding is carried out with smaller media and classifying liners.
By rotation, the mill elevates the ball charge and material and drops the load upon itself. Comminution Mechanisms ... The vast majority of cement plants in North America use " diameter balls as
the largest ball in cement finish mills.
CERAMIC LINED BALL MILL. Ball Mills can be supplied with either ceramic or rubber linings for wet or dry grinding, for continuous or batch type operation, in sizes from 15″ x 21″ to 8′ x 12′.
High density ceramic linings of uniform hardness male possible thinner linings and greater and more effective grinding volume.
Water Spray in Cement Mills. Water spray installed generally in second compartment of ball mill to control cement temperature. Cement discharge temperature should be kept below about 110 o C but,
the same time should allow some 60% dehydration of gypsum to optimize cement strength without excessive false set.
For cement finish mills, we use both a 325 mesh (45 micron) and blaine ... As mentioned previously, the mill ball charge is the major factor in loss of material head or resistance to material
flow in the mill. Big balls have a low specific surface area (Ft 2/Ft3) and, ...
Mono Chamber Raw Mill Ball Charge Design. Hi experts, I want to design a ball charge for a monochamber raw mill in a white cement plant and wanted to know your opinion about it. ... This is the
raw mill not the finish mill. So Wi should be and density of as normal kiln feed. I used d80 of 3000 micron to be on safe side and got max ...
Ball Mill. Cement Lafarge max R5% >25mm, Holcim <50mm. Standard offer from mill manufacturer is R5% >30mm ... Ball and liner coating can occur in raw as well as finish grinding. Ball coating can
be a result of the following conditions: ... Cement Mill. The ball charge tendency in the 1 st compartment is to use the coarser of the gradings available.
[2] Bentz, Garboczi, Haecker, and Jensen (1999), "Effects of Cement Particle Size Distribution on Performance Properties of Portland CementBased Materials", Cement and Concrete Research, 29 ...
Optimization of continuous ball mills used for finishgrinding of cement by varying the L/D ratio, ball charge filling ratio, ball size and residence time Authors: R. Schnatz Abstract...
It gives also a rough interpretation of the ball charge efficiency: Ball top size (bond formula): calculation of the top size grinding media (balls or cylpebs): Modification of the Ball Charge:
This calculator analyses the granulometry of the material inside the mill and proposes a modification of the ball charge in order to improve the ...
Finish Mill Ball Movement • Movement depends on these factors: • Mill Speed • Ball Charge • Liner Shape ... 30 ωcritical = or (rpm ) r Di Ball Charge • Mills for cement grinding normally operate
at a 2535% "filling degree " ...
Also with low ball charge slurry pool will reduce grinding efficiency. Beside lower ball charge will increase P80 of ball mill because of less number of impacts but mean residence time will
increase by lower ball charge because of more volume to occupy. I think at last in this situation, power consumption will increase.
the charge will increase and may cause mill's overcharging, specially if the charge is already at the trunnion level. One should be careful to reduce the power target when converting a charge
from ... Regrind ball mill CuZnPb 40 5 mm 100% 40% Same Regrind ball mill AuCu 90 6 mm 100% 39% Same Regrind ball mill Au 130 8 mm 30 ...
Cement Finish Milling (Part 2: Comminution) ... that is a challenge with the ball mills, especially as you said when grinding various cement types. ... Blaine, etc (and the optimal charge for the
Figure: Grinding media Grinding Ball Charge in Mills: According to Levenson, the optimum grinding ball charge should be r The degree of ball charge varies with in the limit of 25 and 45%. ...
this was proved by two years production records of two different size cement finish mills, installed side by side and grinding the same feed ...
For cement finish mills, we use both a 325 mesh (45 micron) and blaine ... Careful consideration must also be given to ball charge design, material, material load and the grinding action required
in that compartment in order to design the appropriate liner. Technical Training. 4 21 Introduction to Cement Manufacturing
Cement mill 812 kWh/t. Raw mill 40 45 % of total mill power consumption. Diaphragm slot openings are : 68 mm in the first compartment. 810 mm in the second compartment. Prerequisites. Mill
feedsize. Clinker and additives 95% passing 25 mm; 100 % passing 50 mm. Raw material 95% passing 30 mm; 100 % passing 50 mm.
Here, comminution takes place in the rolling pointcontact zone between each charge ball. An example of a two chamber ball mill is illustrated in Fig. 15. Fig. ... not least because the specific
power consumption of vertical mills is about 30% less than that of ball mills and for finely ground cement less still. The vertical mill ...
CEMENT FINISH GRINDING MILL BALL CHARGE CALCULATION Cement mill ball charge calculation Grinding of cement accounts around 40 % Electricity bill at a Cement Plant. For economic efficient
operation of a cement mill correct quantity, size and size range of grinding media plays a vital and pivotal role. To get most out of the grinding media, it ...
The relining time also reduced by % with the new liners. Powell et al., 2006; Rajamani, 2006;Yahyaei et al., 2009;MalekiMoghaddam et al., 2013;MalekiMoghaddam et al., 2015;Cleary and Owen ...
General L/D ratios Raw mills: < L/D < Finish / cement mills: < L/D < Ball Mill Grinding Process Handbook Page 3 of 26 HeidelbergCement Group Guidance Paper Edition ... After the new ball charge
mill audit should be carried out with meter sampling.
In combination with adjustments to the ball charge in the mill, a 3040% capacity increase can be attained. For a well operated system, an overall specific grinding power savings of about 10% can
be realized. ... OK mill system to supplement its existing cement ball mills. FINAL FINISH GRINDING SYSTEM DESCRIPTION The new finish grinding area ...
A cement mill (or finish mill in North American usage [1]) is the equipment used to grind the hard, nodular clinker from the cement kiln into the fine grey powder that is cement. Most cement is currently ground in ball mills and also in vertical roller mills, which are more effective than ball mills.
A 10 MW cement mill, output 270 tonnes per hour. A cement mill (or finish mill in North American usage [1]) is the equipment used to grind the hard, nodular clinker from the cement kiln into the
fine grey powder that is cement. Most cement is currently ground in ball mills and also vertical roller mills which are more effective than ball mills.
Ball mill with Central Drive Mill length Mil l diame te r M il l di amet er 5 3 2 3 2 1 3 5 6 6 4 4 Ball mill with Sid e Driv 1 Inlet 2 Outlet casing 3 Slide shoe bearing 4 Main gearbox 5 Mill
motor 6 Auxiliary drive 1 3 Ball mill for cement grinding 3 Cement grinding taking on the tough tasks Cement ball mills have to achieve the desired ...
When only the HPRM system or the ball mill system was operated to produce cements of the same fineness as the combined process cement, the system throughput was reduced to t/h in the HPRMonly
case and to t/h in the ballmillonly case, with the corresponding specific energy consumptions of kWh/t and kWh/t. The works ...
Regular ball sorting is a must to maintain tube mill efficiencies and avoid losses of up to 1020%. The quality of the sorting and its frequency are both critical. 1st chamber: balls below a
specific dimension have to be rejected to avoid overloading or even back spilling effects. Sorting to repeat every 12 years. 2nd chamber: the balls ...
The energy consumption of the total grinding plant can be reduced by 2030 % for cement clinker and 3040 % for other raw materials. The overall grinding circuit efficiency and stability are
improved. The maintenance cost of the ball mill is reduced as the lifetime of grinding media and partition grates is extended.
|
{"url":"https://www.asdroue-drouette.fr/9177/cement/finish/mill/ball/charge.html","timestamp":"2024-11-07T15:39:40Z","content_type":"application/xhtml+xml","content_length":"28686","record_id":"<urn:uuid:a12b886a-92a7-483d-908c-6b2521a7927a>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00430.warc.gz"}
|
I Wanna Go Green… So Show Me The Math!
You can’t watch the news, turn on the radio, or open a newspaper these days without hearing about global warming. It seems our future is looking rather warmish, and many of our modern conveniences
may be to blame.
I’m not disputing the fact, but at the same time I’ve never had it explained to me precisely and quantitatively how many of the choices I make each day are contributing to global warming. I mean, I
understand the link between driving my car and carbon emissions. The family minivan spews carbon dioxide for goodness sake, so I clearly understand how running my daily errands in a gasoline-powered
car can contribute to the problem.
But what about the rest of my daily routine… what impact does it have? If I roast a chicken in the crock-pot instead of my electric oven, does is make a difference? What if I grill instead? What is
the impact of watering my lawn, or taking a hot shower? How do the decisions I make each and every day impact the environment in terms of energy consumption and greenhouse gas emissions? And what
does it mean when we’re told that something produces 100 lbs of CO2 each year? Is that a lot?
I went in search of all of the facts and figures needed to quantify how the little things I do each day translate into fossil fuel usage. I love numbers, formulas, and equations… so I gathered as
many as I could find. The effort was worthwhile, because it’s helped me to develop a picture of how many of my short-term decisions have long-term impacts.
If you’re interested in determining what many of the things you do each day relate to this global warming issue we’re hearing so much about… read on! Information is never a bad thing – and once you
have it, you are at least armed with additional facts to consider as you go about your day.
First, let’s talk units. Global warming is all about an increase in greenhouse gas emissions, which are “thickening” our atmosphere and preventing more and more solar heat from escaping. The
greenhouse gas we hear the most about is carbon dioxide, because it is what we see the largest quantities of, and is what has been increasing so dramatically over the past 50 years. There are other
greenhouse gases that are also on the rise, but initially we’ll concentrate on carbon dioxide.
As we go about our day, we do things that directly contribute to greenhouse gas emissions (like driving our cars), and indirectly contribute (like using electricity that is produced by burning fossil
fuels, which generates carbon dioxide). To compare the impact of it all, we will look at everything in terms of the pounds of CO2 that are produced as a result of our activities and energy consumption.
Let’s start with the most direct contributor first: automobile emissions. According to the EPA, a gallon of gasoline produces 19.4 pounds (or 8.8 kilograms) of CO2. (1)
In our family, our minivan gets an average of 20 mpg. So for each individual mile we burn approximately 1/20th or 0.05 gallons. So, using the approximation above, that equates to 0.97 lbs of CO2 per
mile… almost one pound per mile.
So those drizzly mornings when I drive my daughter the 5-mile round trip to drop her off at her middle school instead of letting her take the bus, I burn up about a quarter of a gallon of gas, and
produce 4.85 pounds of CO2. In actuality, it is probably a little more, because cars are least efficient during the first few miles of the day while the engine is still cold. My car is probably just
warming up to its peak efficiency just as I’m returning to the garage.
Hmmm… maybe it’s time to invest in an umbrella.
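The back-of-the-envelope arithmetic above is easy to wrap in a few lines of code. A minimal Python sketch, using only the numbers already quoted (19.4 lbs of CO2 per gallon and the minivan's 20 mpg):

```python
# Quick check of the school-run numbers above.
LBS_CO2_PER_GALLON = 19.4   # EPA figure quoted earlier

def trip_co2_lbs(miles, mpg):
    """Pounds of CO2 emitted by driving `miles` in a car that gets `mpg`."""
    return miles / mpg * LBS_CO2_PER_GALLON

print(trip_co2_lbs(1, 20))   # 0.97 lb per mile
print(trip_co2_lbs(5, 20))   # 4.85 lb for the 5-mile school round trip
```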
When you receive your electrical bill each month, your usage is most likely presented in terms of kilowatt-hours (kWh).
The national average emissions factor for electricity is 1.37 pounds CO2 per kWh. (2) Again, this is an average. Different sources of electricity clearly produce different levels of emissions.
Burning coal and natural gas are on the high end, while solar power and wind power are considered to be “green” or renewable sources that do not contribute to greenhouse emissions at all.
If you are interested in discovering specifically what sources are used for electricity in your area, you can contact your utility company and find out. In California, utility companies provide a
Power Content Label (3) which gives a breakdown of what percentage of the power they supply comes from various energy resources. But for now, we’ll use the national average above.
So how do you determine how much energy you’re using, and therefore how much CO2 you are emitting, as you putter around your home each day? Well, most appliances come with an Energy Guide label that
gives an estimation of what the energy use of that appliance is in terms of kWh per year. But if you really want to know specifically what’s going on in your home, you need to find out the
wattage of the appliances you use and determine the number of hours per day that you use them.
Unlike lightbulbs, not everything is stamped with its wattage information. If you’re really a hands-on kind of person, you can still estimate it by finding the current draw (in amperes) and
multiplying that by the voltage used by the appliance. Most appliances in the United States use 120 volts. Larger appliances, such as clothes dryers and electric stoves, use 240 volts. The amperes
might be stamped on the unit in place of the wattage. If not, find a clamp-on ammeter—an electrician's tool that clamps around one of the two wires on the appliance — to measure the current flowing
through it. You can obtain this type of ammeter in stores that sell electrical and electronic equipment. Take a reading while the device is running; this is the actual amount of current being used at
that instant. (4)
If you don’t feel like getting down to that level of nitty-gritty, there are plenty of tables available that will provide an estimated wattage rating for major appliances. To help me answer the
question about using my crock-pot instead of my oven to roast a chicken for example, I looked up the estimated wattage of the two appliances so I could compare.
According to my source (5), the wattage of an electric oven is approximately 4000 W, while a crock-pot uses approximately 250 W. So, I can roast a chicken for 2 hours in the oven, or for 5 hours in
the crock-pot. Does it make a big difference? Let’s see:
Oven: (4000 W) x (1 kW/1000 W) x (2 hours) = 8 kWh
Crock-pot: (250 W) x (1 kW/1000 W) x (5 hours) = 1.25 kWh
Now, like many appliances, ovens and crock-pots aren’t “on” the entire time they are being used. They heat up to the temperature required, and then cycle on and off as many times as are needed to
maintain that temperature. What percentage of time are they off? I actually don’t know, and it is dependent on many factors (including how many times you open the oven door or lift the lid of the
crock-pot to check on your meal). But let’s assume over a period of time, our cooking appliances are actually on for 70% of the time. (This is a guess folks, completely off the top of my head. If
anyone has any real figures on this – please let me know!)
Back to our chicken. If we reduce our calculations by 30%, we have comparative figures for our two appliances of:
Oven: 5.6 kWh at 1.37 pounds CO2 per kWh corresponds to 7.67 lbs of CO2
Crock-pot: 0.875 kWh at 1.37 pounds CO2 per kWh corresponds to 1.20 lbs of CO2
So, by roasting a chicken in the crock-pot, I can save approximately 4.7 kWh, and avoid corresponding emissions of approximately 6.4 lbs of CO2. Not to mention avoid heating up my kitchen by using
the oven, which impacts another energy hungry beast in my house – the air conditioner. Nice!
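Here is the same comparison as a small Python sketch, so you can plug in your own wattages and cook times (the 70% duty cycle is the guess described above, and the function name is mine):

```python
LBS_CO2_PER_KWH = 1.37  # national average emissions factor (see above)

def cooking_co2(watts, hours, duty_cycle=0.70):
    """Estimated kWh used and lbs of CO2 emitted for one cooking session."""
    kwh = watts / 1000 * hours * duty_cycle
    return kwh, kwh * LBS_CO2_PER_KWH

oven_kwh, oven_co2 = cooking_co2(4000, 2)    # ~5.6 kWh, ~7.67 lbs CO2
crock_kwh, crock_co2 = cooking_co2(250, 5)   # ~0.875 kWh, ~1.20 lbs CO2
print(f"Savings: {oven_kwh - crock_kwh:.2f} kWh, "
      f"{oven_co2 - crock_co2:.2f} lbs CO2")  # ~4.73 kWh, ~6.47 lbs
```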
For those that would rather sauté their chicken on the stove, or throw it on the barbeque, there are conversion factors to calculate CO2 emissions from the use of natural gas and propane as well.
If you get a bill each month for natural gas, most likely it is going to display your gas usage in terms of therms. One therm is equivalent to 100,000 BTU, and one therm of natural gas generates 11.7
pounds of CO2 (6). To determine your energy usage and corresponding emissions from using your gas stove, you first need to discover your stove’s BTU input rate. Most major gas appliances have a fuel
rating plate attached to them that will tell you its hourly BTU input. But just as with electrical appliances, if you cannot find the specific information on your appliances, there are several tables
available that can help you estimate the values.
The estimated energy usage for a gas stovetop is 9000 BTUs/hr (7). So if you decide to make a nice chicken sauté, a half-hour of cook time on the stove would equate to:
(9000 BTU/hr) x (.5 hr) x (1 therm/100,000 BTU) x (11.7 lbs CO2/therm) = 0.53 lbs CO2
What if you like to grill? During the summer months, there’s not much that can beat a great meal prepared on the BBQ. But what is the impact of using a propane-fueled grill? Well, the conversion
factor for propane is that 1 gallon of propane generates 11 pounds of CO2 (7). But to determine how quickly your grill burns through a gallon of propane, you will once again need to have an idea of what
the BTU rating is for your particular grill.
Generally you will find that the BTU rating of most grills is approximately 10,000 BTU per burner. So if you have a 4-burner grill, you’re looking at a grill in the neighborhood of
40,000 BTUs/hr. Now given the amount of technical data available on most grills (to compare against the neighbors’, of course), you may know exactly what the BTU rating is for your
6-burner-with-rotisserie-and-smoker stainless steel behemoth. But if the particulars have slipped your mind, you can use the 10,000 BTU/burner rule.
So to determine the propane used (and CO2 generated) when you throw some chicken on the grill, the last piece of information you will need to know is that each gallon of propane is equivalent to
91,502 BTU (8). Given that, what does the math tell us is the result of a half-hour of grilling on our Weber 4-burner?
(40,000 BTU/hr) x (0.5 hr) x (1 gal/91,502 BTU) x (11 lbs CO2/gal) = 2.40 lbs CO2
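Since both fuels follow the same pattern (BTUs burned, divided by the energy content of the billing unit, times the emissions factor), the stove and grill examples above fit in one small sketch; the conversion factors are the ones cited, and the function names are mine:

```python
LBS_CO2_PER_THERM = 11.7        # natural gas
BTU_PER_THERM = 100_000
LBS_CO2_PER_GAL_PROPANE = 11.0
BTU_PER_GAL_PROPANE = 91_502

def gas_stove_co2(btu_per_hr, hours):
    """Lbs of CO2 from a natural-gas appliance run for the given time."""
    return btu_per_hr * hours / BTU_PER_THERM * LBS_CO2_PER_THERM

def propane_grill_co2(btu_per_hr, hours):
    """Lbs of CO2 from a propane appliance run for the given time."""
    return btu_per_hr * hours / BTU_PER_GAL_PROPANE * LBS_CO2_PER_GAL_PROPANE

print(gas_stove_co2(9_000, 0.5))       # ~0.53 lbs CO2 (half-hour saute)
print(propane_grill_co2(40_000, 0.5))  # ~2.40 lbs CO2 (half-hour grilling)
```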
And what if you decide to use a charcoal grill instead of propane? A charcoal grill operated for an hour will emit approximately 11 pounds of carbon dioxide (9). And unlike a gas grill, it cannot be
turned off once you pull your meat off. So although there are certainly flavor benefits to cooking over a smoky grill, if your concern this 4th of July is reducing your environmental impact, then
propane is the way to go.
It’s easy to miss the link between water usage and carbon emissions. Water of course is not a source of greenhouse gas itself, but the electricity required to transport, treat, and distribute it
certainly is!
According to the CEC (10), the typical energy use for urban drinking water supply is comprised of the following segments:
Conveyance: Average energy use – 100 kWh/MG
Treatment: Average energy use – 250 kWh/MG
Distribution: Average energy use – 1,150 kWh/MG
When you add it all together, on average it takes roughly 1,450 kWh per Million Gallons (MG) to deliver clean water to our homes. (The segment averages above are rounded; we’ll use 1,450 kWh/MG in the calculations below.)
So how much water do we use during our typical daily activities? Well, let’s begin with our daily shower.
Conventional Showerhead (Avg. 4 gpm)
(4 gal/min) x (10 min) x (1 MG/ 1,000,000 gal) x (1450 kWh/MG) = 0.058 kWh
Low-Flow Showerhead (2.5 gpm)
(2.5 gal/min) x (10 min) x (1 MG/ 1,000,000 gal) x (1450 kWh/MG) = 0.036 kWh
Then using our previously discovered conversion to CO2 for electricity:
Conventional: (0.058 kWh) x (1.37 lb CO2/kWh) = 0.079 lbs CO2
Low-Flow: (0.036 kWh) x (1.37 lb CO2/kWh) = 0.049 lbs CO2
So one shower by itself doesn’t have a huge impact, regardless of the showerhead used. But not many of us out there are happy to stop at one shower for a lifetime. Let’s assume we take 6 showers a
week (maybe we take Saturday off?), every week, each year.
Conventional: (0.079 lbs CO2) x (6 days/week) x (52 weeks/yr) = 24.65 lbs CO2/year
Low-Flow: (0.049 lbs CO2) x (6 days/week) x (52 weeks/yr) = 15.28 lbs CO2/year
And how many people are in your household (hopefully) taking showers on a regular basis? If you are a family of four, with each person taking the equivalent of 6 ten-minute showers per week:
Conventional: (24.65 lbs CO2/year) x 4 = 98.60 lbs CO2/year
Low-Flow: (15.28 lbs CO2/year) x 4 = 61.12 lbs CO2/year
What about other daily uses of water? Where we live, the lawn must be watered just about daily if we want our grass to remain anything close to green during the summer months. So how much water and
electricity is being used each morning when we hear the sprinklers kick on, and what is the impact?
To determine the gallons of water that are used during your daily watering cycle, calculate the gallons per minute (gpm) used by each zone of your sprinkler system.
Here’s an example. Let’s say that after a quick look over your front and back lawns, you determined that you had the equivalent of 20 “full-circle” heads in your entire system; a typical full-circle spray head uses about 3.0 gpm (11). Also, each of your
zones is set to water for the same amount of time each morning, 12 minutes. Skipping a step or two, you can just multiply out the total gpm, and the resulting water usage, for the whole system.
(20 sprinklers) x (3.0 gpm/sprinkler) x (12 min/day) = 720 gallons/day
This equates to:
(720 gallons/day) x (1 MG/ 1,000,000 gal) x (1450 kWh/MG) = 1.044 kWh/day
(1.044 kWh/day) x (1.37 lb CO2/kWh) = 1.43 lb CO2/day
Assuming the sprinklers run 5 mornings a week through the 13 weeks of summer:
(1.43 lb CO2/day) x (5 days/week) x (13 weeks) = 92.95 lbs CO2
So in this case, just running the sprinklers through the summer uses almost the same amount of resources and has the same impact as a family of four showering for an entire year!
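Both water examples follow the same chain (gallons, then kWh, then CO2), so one small helper covers showers and sprinklers alike (a sketch using the averages above; the function name is mine):

```python
KWH_PER_MILLION_GAL = 1450  # conveyance + treatment + distribution (see above)
LBS_CO2_PER_KWH = 1.37

def water_co2(gallons):
    """Lbs of CO2 attributable to delivering the given volume of tap water."""
    return gallons / 1_000_000 * KWH_PER_MILLION_GAL * LBS_CO2_PER_KWH

shower = water_co2(2.5 * 10)           # low-flow head, 10-minute shower
print(shower * 6 * 52 * 4)             # family of four: ~62 lbs/yr (the
                                       # article's 61.12 rounds each step)
sprinklers = water_co2(20 * 3.0 * 12)  # 20 heads x 3.0 gpm x 12 minutes
print(sprinklers * 5 * 13)             # 5 mornings/week, 13 weeks: ~93 lbs
```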
All of our equations have helped us to quantify the amount of carbon dioxide that is emitted into the atmosphere as a result of the electricity, water, natural gas and gasoline that we use. But how
much is too much? And what do these pounds of carbon dioxide mean in terms of something we can better understand?
Natural components of our earth’s ecosystem are doing their best to offset the carbon dioxide produced each year. The oceans, trees, plants, and soil have the natural ability to absorb carbon dioxide
through photosynthesis – the process through which plants and trees convert water, sun, and carbon dioxide into fuel and nutrients, while producing oxygen as a byproduct. As efficient as these
natural “carbon sinks” are however, they can no longer keep up with the accelerated rate that carbon dioxide is being released through the use and burning of fossil fuels.
Of the approximately 8 billion tons of carbon emitted each year, scientists believe about 30 percent is absorbed by the oceans, and about 30 percent is absorbed by terrestrial ecosystems, especially
trees. The remaining 40 percent however, accumulates in the atmosphere (12).
So if over time, the oceans begin to max-out on their carbon absorption capabilities, our only remaining natural resource to counterbalance growing carbon dioxide emissions is plant life… especially
trees. But how many trees does it take to absorb the CO2 that is being emitted as a result of our daily activities?
Different trees absorb CO2 at different rates. An acre of Douglas fir, for example, is estimated to sequester about 1.4 metric tons (MT) of carbon per year (13). Converting that carbon to CO2 (using the 44:12 molecular-weight ratio of CO2 to carbon):
(1.4 MT C/acre) x (44 units CO2 / 12 units C) x (2203 lbs/MT) = 11,308.7 lbs CO2
Each year, an acre of Douglas fir trees can absorb 11,308.7 lbs of carbon dioxide.
But what about just one tree?
A medium growth coniferous (evergreen) tree, planted in an urban setting and allowed to grow for 10 years, sequesters 23.2 lbs of carbon (14). This estimate assumes the trees are planted when they
are approximately 4.5 feet tall (the typical size of tree purchased in a 15-gallon container). Once again, we need to convert this estimate to pounds of CO2 removed from the atmosphere.
(23.2 lbs C) x (44 units CO2 / 12 units C) = 85.1 pounds of CO2
A medium growth evergreen tree, planted in an urban setting and allowed to grow for 10 years, absorbs approximately 85 pounds of carbon dioxide.
So every tree can make a difference, but clearly there is a vast difference in scale in terms of how quickly CO2 can be released into the atmosphere compared to how long it would take for that same
amount to be absorbed naturally. For example, it would take a single tree 10 years to absorb the carbon dioxide emitted in less than 90 minutes by a typical car driving on the freeway.
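To put your own annual total into tree-planting terms, the conversion is a one-liner (the 12,000 miles per year in the example is my illustrative assumption, not a figure from the article):

```python
LBS_CO2_PER_TREE_DECADE = 85.1  # one urban evergreen, grown for 10 years

def trees_to_offset(lbs_co2):
    """Tree seedlings to plant (and grow 10 years) to absorb the emissions."""
    return lbs_co2 / LBS_CO2_PER_TREE_DECADE

# Assumed: 12,000 miles/year in a 20-mpg car at ~0.97 lbs CO2 per mile.
print(trees_to_offset(12_000 * 0.97))  # ~137 trees for one year of driving
```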
For further illustration, here are some additional “weekly activities” and what they equate to after a month in terms of energy consumption, CO2 emissions, and equivalent fossil fuel use. Also
included is the number of evergreen tree seedlings that would need to be planted (and left to grow for 10 years) to offset the resulting emissions.
* Monthly costs are based on: $0.10 per kWh, $3.00 per gallon for gasoline, $1.20 per therm for natural gas, $1.90 per gallon for propane, and $5.00 per 10 lb. bag of charcoal.
** Coal emissions were estimated by averaging the carbon coefficients for bituminous and sub-bituminous categories of coal, which make up over 90% of the coal used in the U.S. (15)
SO WHAT DOES IT ALL MEAN?
The topic of global warming is a controversial one. There are as many people confident that the temperature increases we are experiencing are part of a natural cycle as there are people convinced
that it is the beginning of a steep incline like nothing we’ve seen before. But wherever the truth lies, there is no dispute that most of the things we do as part of our daily
industrialized life result in significant carbon emissions into our atmosphere.
What started for me as mere curiosity has resulted in the discovery that, as a single individual, my impact on this planet is much larger than I would have guessed. It was eye-opening and surprising
– and it has definitely made me fold some additional factors into my everyday decision making. Now, instead of making choices based solely on what are the easiest and fastest options, I also think of
the long-term impacts.
As I said before, information is never a bad thing. And the more we have in our own back pocket, the less we need to rely on others to feed it to us - either with or without their own personal spin.
There's a lot of information here, but still one question that hasn't been answered yet. How do you roast a chicken in a crock-pot?
Well, I'm glad you asked...
GOTTA GO GREEN - CHICKEN IN A POT
1 cup baby carrots
4-6 small red potatoes - halved
1 onion - sliced
4-5 lb. whole chicken
2 tsp. garlic salt
1/2 tsp. coarse black pepper
1 tsp. dried basil
1 tsp. poultry seasoning
1/2 cup water, chicken broth, or white wine
Place vegetables in bottom of crock-pot. Place chicken on top of vegetables. Add seasonings and water/broth/wine.
Cover. Cook on low 8-10 hours, or High 4-5 hours, until juice runs clear from chicken. (Use 1 cup of liquid if cooking on High.)
Makes 6 servings.
(1) http://www.epa.gov/otaq/climate/420f05004.htm
(2) http://www.epa.gov/climatechange/emissions/ind_assumptions.html
(3) http://www.energy.ca.gov/consumer/power_content_label.html
(4) http://www.oru.com/energyandsafety/energyefficiency/calculatingenergyuse.html
(5) http://www.powerhousetv.com/stellent2/groups/public/documents/pub/phtv_000296.pdf
(6) http://revelle.net/lakeside/lakeside.new/understanding.html
(7) http://www.wisconsinpublicservice.com/home/appcalc_gas.asp
(8) http://www.coxontool.com/index.php/Airstream/Propane
(9) http://www.ornl.gov/info/press_releases/get_press_release.cfm?ReleaseNumber=mr20030703-00
(10) http://www.solar2006.org/presentations/forums/f23-cooney.pdf
(11) http://www.rainbird.com/landscape/technical/articles/scheduling.htm
(12) http://www.science20.com/news/bad_for_carbon_offsets_not_all_trees_are_the_same_at_reducing_global_warming
(13) http://www.spiegel.de/international/world/0,1518,483540,00.html
(14) http://www.usctcgateway.net/tool/
(15) http://www.eia.doe.gov/oiaf/1605/coefficients.html/
|
{"url":"https://www.science20.com/science_motherhood/i_wanna_go_green%E2%80%A6_so_show_me_math-2490","timestamp":"2024-11-12T15:50:13Z","content_type":"text/html","content_length":"57657","record_id":"<urn:uuid:71d04f77-c765-459e-9184-436f54c218ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00648.warc.gz"}
|
Optimal Response to Epidemics and Cyber Attacks in Networks
This article introduces novel formulations for optimally responding to epidemics and cyber attacks in networks. In our models, at a given time period, network nodes (e.g., users or computing
resources) are associated with probabilities of being infected, and each network edge is associated with some probability of propagating the infection. A decision maker would like to maximize the
network's utility; keeping as many nodes open as possible, while satisfying given bounds on the probabilities of nodes being infected in the next time period. The model's relation to
previous deterministic optimization models and to both probabilistic and deterministic asymptotic models is explored. Initially, maintaining the stochastic independence assumption of previous work,
we formulate a nonlinear integer program with high-order multilinear terms. We then propose a quadratic formulation that provides a lower bound and feasible solution to the original problem. Further
motivation for the quadratic model is given by showing that it alleviates the assumption of stochastic independence. The quadratic formulation is then linearized in order to be solved by standard
integer programming solvers. We develop valid inequalities for the resulting formulations.
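As a rough, self-contained illustration of the independence assumption mentioned above (this is my own sketch, not the paper's formulation, and all names are illustrative), the probability that a node is infected in the next period can be computed as one minus the probability that it escapes infection from its own state and from every open neighbor:

```python
def next_infection_prob(p, edge_prob, open_nodes, j):
    """P(node j infected next period), assuming stochastic independence.

    p[i]       -- current probability that node i is infected
    edge_prob  -- dict mapping directed edges (i, j) to propagation probability
    open_nodes -- set of nodes the decision maker keeps open
    """
    escape = 1.0 - p[j]  # j escapes its own current infection
    for (i, k), q in edge_prob.items():
        if k == j and i in open_nodes:
            escape *= 1.0 - p[i] * q  # open neighbor i fails to infect j
    return 1.0 - escape

# Two open neighbors, each infected with prob 0.5, edges propagating with 0.2:
print(next_infection_prob({0: 0.5, 1: 0.5, 2: 0.0},
                          {(0, 2): 0.2, (1, 2): 0.2}, {0, 1, 2}, 2))  # 0.19
```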
Original language American English
Media of output Departmental Seminar/Colloquium
Place of Publication Bar-Ilan University, Ramat Gan; Department of Management
State Published - 2013
• Goldberg, N. (Speaker)
2013 → …
Activity: Talk or presentation › Oral presentation
|
{"url":"https://cris.biu.ac.il/en/publications/optimal-response-to-epidemics-and-cyber-attacks-in-networks-4","timestamp":"2024-11-06T18:03:37Z","content_type":"text/html","content_length":"52416","record_id":"<urn:uuid:6ed4b3af-e93d-409f-8cfb-85e9a7c5f05e>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00646.warc.gz"}
|
Jack decides to follow a certain diet plan. He drinks 4 quart of water on day one. Each day, he increases his water intake by 1 pint. How many gallon of water will he drink on day 21 of his plan?
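A worked solution, using standard U.S. liquid measures (1 quart = 2 pints, 1 gallon = 4 quarts = 8 pints): day one's 4 quarts equal 8 pints, and the intake grows by 1 pint per day, so on day 21 Jack drinks 8 + (21 − 1) × 1 = 28 pints = 14 quarts = 3.5 gallons.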
|
{"url":"https://byjus.com/question-answer/jack-decides-to-follow-a-certain-diet-plan-he-drinks-4-quart-of-water-on/","timestamp":"2024-11-09T23:15:16Z","content_type":"text/html","content_length":"153667","record_id":"<urn:uuid:b2b89269-2e30-4f25-8035-d86f40968589>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00060.warc.gz"}
|
Find Riverside, CA Statistics Tutors For Lessons, Instruction or Help
A Tutor in the Spotlight!
matthew o.
• Phelan, CA 92329 ( 30.1 mi )
• Statistics Tutor
• Male Age 37
• Member since: 08/2012
• Will travel up to 40 miles
• Rates from $20 to $40 /hr
• On-line In-Home In-Group
• Also Tutors: Algebra III, Math, Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
Read More About matthew o...
Have a B.Sc. degree in chemical engineering from UCLA. Have tutored students in algebra, pre- calculus, calculus, and chemistry in the past. Awards: American Math Competition School Winner and AIME
qualifier (award to the top 5% of the competition) Putnam Math Participant Employee of the month award (UCLA) Dean's Honors List (UCLA, GPA: 3.9) Dean's List (UC Merced)
Neil P.
• Santa Ana, CA 92703 ( 28 mi )
• Experienced Statistics Tutor
• Male Age 46
• Member Since: 12/2011
• Will travel up to 10 miles
• Rates from $30.00 to $40.00 /hr
• On-line In-Home In-Group
• Also Tutors: Math, Trigonometry, Pre-Calculus, Pre-Algebra, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
My comments on tutoring Statistics : I have assisted many students earn Top grades in Statistics.
My goal as an academic tutor is to help you or your student understand important concepts and formulas, while applying them to any problems or exercises that are gone over in lecture or assigned for homework.
Since 2001, I have been privileged to have tutored over 700 students in the very intense subjects of Organic chemistry, General chemistry, Calculus, and Physics. I take pride in my tutoring, in that
my students will succeed inside and outside of the classroom. I have tutored over 25,000 hours since 2001, both paid and volunteer peer tutoring. I will go the extra mile to see that you or your
student succeed and reach the pinnacle of their potential!
Anthony A.
• Santa Ana, CA 92704 ( 28.6 mi )
• Expert Statistics Tutor
• Male Age 30
• Member Since: 01/2015
• Will travel up to 50 miles
• Rates from $22.00 to $150.00 /hr
• On-line In-Home In-Group
• Also Tutors: Math, Pre-Algebra, Geometry, College Algebra, Algebra II, Algebra I
Flexible Tutor ready to work with you!!
Teaching style varies depending on your child’s learning style; it’s not a “one size fits all” idea. I adapt to them, and that way I can best see improvement.
I have been in the education department teaching and tutoring for over a decade now, therefore, I will work with each student and help them to the best of my ability. I enjoy seeing students improve
their grades and letting me know they feel more confident with testing and homework. Children are the future pillars of society and the future of tomorrow so what they know should be reinforced
today. The only way to contact me is at arr1aga@outlook.com or (424) 361-8497 please or I won't be able to read your messages and get back to you thank you!. Thank You!
Chang H.
• Chino Hills, CA 91709 ( 16.3 mi )
• Experienced Statistics Tutor
• Male
• Member Since: 04/2012
• Will travel up to 10 miles
• Rates from $50.00 to $60.00 /hr
• On-line In-Home In-Group
• Also Tutors: Algebra III, Trigonometry, Pre-Calculus, Pre-Algebra, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
Math and Chem Tutor (MaCT)
Have a PhD in organic chemistry. Have been teaching assistants in almost undergraduate level chemistry classes (general chemistry, organic chemistry, and analytical chemistry) during graduate school.
Have more than 13 years tutoring experience. Paid my undergraduate college education through tutoring and scholarships. My strongest subjects are chemistry and mathematics.
I would like to explain a global picture of the class material and what to expect from learning the class. All of this is not to scare students, but to mentally prepare them for the challenge ahead. I
like to give my students quizzes after the tutoring, just to make sure they understand. This is a feedback process, so I can adjust my tutoring style accordingly.
Dea T.
• Glendora, CA 91741 ( 25 mi )
• Statistics Tutor
• Female Age 64
• Member Since: 12/2014
• Will travel up to 10 miles
• Rates from $35.00 to $60.00 /hr
• On-line In-Home In-Group
• Also Tutors: Math, Trigonometry, Pre-Algebra, Geometry, Algebra II, Algebra I
A Little Bit About Me...
I am committed to improving students' test scores, grades and confidence in the area of Math with an effective, fun and personalized approach to best meet each individual's needs. I have a Master's degree
in Civil Engineering and have been tutoring various levels of Math for over 20 years at both American and French schools. I have taught math to hundreds of students and my areas of expertise are:
-Pre-Algebra -Algebra I & II -Geometry -Statistics I prefer to teach at a library convenient for both of us, however also open to meeting at a local coffee shop if this is most convenient for you.
Fees range from $25-35 depending on location and topic of tutoring session and subject. Please feel free to call or email me with your contact information, including location, learning concerns or
subjects to focus on, as well as any questions you have. Thank you, and I look forwa ... read more
Hector O.
• Riverside, CA 92503 ( 4 mi )
• Statistics Tutor
• Male Age 33
• Member Since: 01/2015
• Will travel up to 50 miles
• Rates from $15.00 to $35.00 /hr
• In-Home In-Group
• Also Tutors: Math, Pre-Calculus, Pre-Algebra, Geometry, Calculus II, Calculus I, Algebra II, Algebra I
More About This Tutor...
Have worked as a tutor for Sylvan Learning Center for over two years. Have worked with all types of students from different backgrounds. Instruct k-12 students on any level of math in which they are
in. Run Sylvan based curriculum for specified math programs. I am used to working with 3 students at a time, larger groups are not an issue.
Sunanda A.
• Fontana, CA 92336 ( 12.4 mi )
• Statistics Tutor
• Male
• Member Since: 11/2013
• Will travel up to 20 miles
• Rates from $30.00 to $35.00 /hr
• In-Home In-Group
• Also Tutors: Math, Trigonometry, Pre-Calculus, Pre-Algebra, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
Very experienced Math tutor guarantees student success.
Innovative and resourceful retired professional seeking to contribute proven teaching and motivational techniques to maximize student mathematical comprehension. Holds a BS in Mathematics and a PH.D
in Engineering. • Over 10 years experience in teaching Mathematics to undergraduates. • One year experience teaching Mathematics in a public school. • Tutoring High School and Junior High School
students in Mathematics.
Ramez E.
• Rancho Cucamonga, CA 91730 ( 12.9 mi )
• Statistics Tutor
• Male Age 39
• Member Since: 11/2015
• Will travel up to 20 miles
• Rates from $25.00 to $35.00 /hr
• On-line In-Home In-Group
• Also Tutors: Algebra III, Math, Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
More About This Tutor...
Medi-Cal Biller 02/2014 - 05/2015 & 09/2015 – Present Owl Western pharmacy o Create and maintain a new ways to save money and reduce the cost. o Solve any problem to enhance the performance of the
pharmacy. o Create very good relations with the customers. o Updating the statues of the patients in the facilities. o Adding the insurances for the patients & bill them. o Make priors authorizations
for the rejected claims. Project Engineer 05/2014 - 09/2015 Electracorp, Inc. o Managing, editing and adding all the drawing by "Bluebeam Revu CAD" o Reading all the cut sheets from the vendors,
creating the submittals. o Create the RFI's "Request for Information", Submittals, Releases, etc and sent it to the General Contractor. o Create the BOM "Bill of Materials", a material release logs.
o Follow up the RFI's, Submittals, Releases, etc and maintain a log for every single thing. o Cr ... read more
• Rancho Cucamonga, CA 91730 ( 12.9 mi )
• Statistics Tutor
• Male Age 34
• Member Since: 11/2013
• Will travel up to 5 miles
• Rates from $40.00 to $47.00 /hr
• Also Tutors: Math, Trigonometry, Pre-Calculus, Pre-Algebra, Geometry, Calculus II, Calculus I, Algebra II, Algebra I
PRINCIPAL OF A SCHOOL IN INDIA. I HAVE EXPERIENCE GIVING EXTENSION LECTURES IN COLLEGE. ALSO THREE YEARS' EXPERIENCE TEACHING 11TH GRADE AND 12TH GRADE IN NON-MEDICAL SUBJECTS.
Joshua T.
• Rancho Cucamonga, CA 91737 ( 15.4 mi )
• Statistics Tutor
• Male Age 29
• Member Since: 07/2018
• Will travel up to 20 miles
• Rates from $23.00 to $51.00 /hr
• On-line In-Home In-Group
• Also Tutors: Math, Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
B.S. in Biochemistry, Cal Poly San Luis Obispo
B.S. in Biochemistry, California Polytechnic State University at San Luis Obispo Four years research lab experience, three years in-depth sales knowledge, experience working within a team comprised
of varying professional backgrounds and enthusiastic to utilize my experience to further benefit the growth of the company. Laboratory experience: Cellular Biology, Molecular Biology, Microbiology,
Protein Techniques, Metabolism, Organic Chemistry I-III, Biochemical Principles, Biomedical Ethics, Chemistry of Drugs and Poisons, Quantitative Analysis. Cell culture maintenance, micro pipetting,
micro-centrifugation, cell counting, bright field microscopy, southern blotting, western blotting, biochemical testing, differential staining, microbial plating, media tests, dilutions, titrations,
antimicrobial testing, PCR, RT-PCR, qPCR, gel electrophoresis, Gel Doc, SDS-PAGE, and isolation, pur ... read more
Mikhail S.
• Chino Hills, CA 91709 ( 16.4 mi )
• Expert Statistics Tutor
• Male Age 31
• Member Since: 09/2014
• Will travel up to 20 miles
• Rates from $35.00 to $40.00 /hr
• On-line In-Home In-Group
• Also Tutors: Algebra III, Math, Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
Highly Experience Math Tutor
I hold a Bachelor's Degree in Mathematics from UC Irvine. I have been tutoring for over 5 years and understand what it takes for students to learn and fully comprehend mathematics. My approach to
math has always been to treat it like a language and an art, not just a science. This has made math not only rewarding, but also enjoyable. With my mathematics abilities and knowledge, your student
will acquire new skills in problem solving and critical thinking.
I first assess where your child is having difficulties and plan a coursework portfolio filled with multiple worksheets and problems for them to work on. I help them finish and understand all homework
and can assign supplemental assignments upon the parent's request. I make sure that the student is fully comfortable with all of the fundamentals and topics before moving on.
Rahim F.
• Chino Hills, CA 91709 ( 16.4 mi )
• Statistics Tutor
• Male Age 66
• Member Since: 09/2020
• Will travel up to 20 miles
• Rates from $35.00 to $50.00 /hr
• I am a Certified Teacher
• On-line
• Also Tutors: Algebra III, Math, Trigonometry, Pre-Calculus, Pre-Algebra, Logic, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
More About This Tutor...
30 years of teaching experience at the college and university level. I have a great success rate over the years with great command of the language and topics. I am very familiar with different
technologies applicable to teaching. I am currently using Zoom to teach my classes. I currently have availability on weekends as well.
Certified online professor
Alice M.
• Upland, CA 91786 ( 16.7 mi )
• Experienced Statistics Tutor
• Female Age 35
• Member Since: 11/2012
• Will travel up to 15 miles
• Rates from $15.00 to $30.00 /hr
• In-Home In-Group
• Also Tutors: Math, Algebra II, Algebra I
Read More About Alice M...
Patient; gets to the point very quickly.
I am very good at math. I am a problem solver and I can express my points clearly. I am currently in a math-related master's degree program. I tutored for 6 months during my undergraduate years and
helped my student achieve a 25%–30% improvement in math performance. Also, I speak native Chinese, so I can help you learn another language.
Sneha P.
• Upland, CA 91784 ( 17.8 mi )
• Statistics Tutor
• Female Age 30
• Member Since: 11/2012
• Will travel up to 20 miles
• Rates from $15.00 to $30.00 /hr
• On-line In-Home In-Group
• Also Tutors: Math, Pre-Algebra, Calculus II, Calculus I, Algebra II, Algebra I
A Little Bit About Me...
HI! My name is Sneha and I am a first year molecular biology major at the University of California, Berkeley. I just graduated from Upland High School as part of the top ten students of my class, and
I am very experienced in the subjects I have listed on my page. I am excited to share my knowledge with my students!
I prefer a one-on-one teaching style where I can utilize outside resources to truly help students comprehend a concept.
Steve G.
• Anaheim, CA 92808 ( 17.9 mi )
• Statistics Tutor
• Male
• Member Since: 04/2012
• Will travel up to 30 miles
• Rates from $39.50 to $49.50 /hr
• On-line In-Home In-Group
• Also Tutors: Algebra III, Math, Trigonometry, Pre-Calculus, Pre-Algebra, Geometry, College Algebra, Calculus II, Calculus I, Algebra II, Algebra I
A Little Bit About Me...
I am a registered professional civil engineer in California and Texas with more than 20 years real world civil engineering experiences. I have M.S. and B.S. degrees in Civil Engineering. I enjoyed
working with young people. I have taught and trained many interns and young engineers and have helped them to pass EIT and P.E. examinations.
My teaching style has been formulated since I was in college as a Teaching Assistant. I have found that although following some examples to solve one or two problems may be a quick way to achieve
short-term success, the students forget them quickly, and when the situation changes slightly, they may not be able to solve the problem by themselves again. I would like to
teach them the theory behind the problems and to make sure that the students have a full understanding of the subject. This is my personal experience. ... read more
Ryan J.
• Menifee, CA 92584 ( 19.8 mi )
• Statistics Tutor
• Male Age 44
• Member Since: 04/2013
• Will travel up to 20 miles
• Rates from $25.00 to $40.00 /hr
• In-Home In-Group
• Also Tutors: Algebra III, Math, Trigonometry, Pre-Calculus, Pre-Algebra, Geometry, College Algebra, Algebra II, Algebra I
More About This Tutor...
I adapt my teaching to the students' learning style.
I have been tutoring math and science since I was a sophomore in high school. I thoroughly enjoy seeing my students succeed in both their academic and personal lives. I was an AVID tutor for three
years, the most requested math tutor at Mount San Jacinto College for five years and a private tutor all that time.
|
{"url":"https://www.tutorselect.com/find/riverside_ca/statistics/tutors","timestamp":"2024-11-11T11:33:03Z","content_type":"text/html","content_length":"115895","record_id":"<urn:uuid:5ea871a1-d9d3-4c46-9dfd-755a822c0c86>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00565.warc.gz"}
|
Mortgage Recast Calculator
A mortgage recast calculator is a specialized tool that allows homeowners to estimate the financial benefits of recasting their mortgage. Recasting, also known as re-amortizing, is an
often-overlooked option that can potentially save homeowners thousands of dollars in interest over the life of a loan. Unlike refinancing, a recast doesn’t require obtaining a new loan but involves
making a lump-sum payment toward the mortgage principal and then recalculating the monthly payments based on the remaining balance.
Understanding how a mortgage recast calculator works, what factors influence its results, and how recasting compares to other mortgage management options are essential for any homeowner looking to
optimize their financial strategy. This Tech Bonafide article will explore these areas in detail, including the advantages, disadvantages and common scenarios for using a mortgage recast calculator.
Mortgage Recast
Before diving into the specifics of a mortgage recast calculator, it’s important to have a clear understanding of what a mortgage recast is.
A mortgage recast is when a borrower makes a substantial lump-sum payment toward their remaining mortgage principal and the lender recalculates the loan’s monthly payments based on the new,
reduced balance.
The interest rate, loan term and other conditions remain the same, but since the principal balance is lower, the monthly payments are reduced. This is different from refinancing, where the borrower
takes out a new loan with different terms, interest rates and often new fees.
Working of Mortgage Recast Calculator
A mortgage recast calculator allows homeowners to input specific variables related to their loan and potential lump-sum payment. The calculator then estimates how much the monthly payments will
decrease and how much interest savings can be realized over the remaining term of the loan.
Here are the basic components that are typically included in a mortgage recast calculator:
Key Components
1. Current Loan Balance: The first thing a mortgage recast calculator requires is the remaining loan balance. This is how much you still owe on your mortgage. For instance, if you’ve been paying a
$300,000 mortgage for several years and have $200,000 left, that is your current loan balance. This number provides the foundation for calculating potential savings from a recast.
2. Interest Rate: The interest rate is the cost of borrowing the money and is crucial in determining how much interest you’ll save by recasting the loan. Mortgage recasts only work with fixed-rate
loans since adjustable-rate mortgages (ARMs) have fluctuating rates, which would make recasting calculations unreliable.
For example, if your mortgage has an interest rate of 4%, that rate remains constant both before and after the recast. However, since the principal balance is reduced after the lump-sum payment, the
amount of interest you pay each month will decrease as well.
3. Monthly Payment: Your current monthly payment is the amount you’re required to pay toward the mortgage, which includes both principal and interest. A mortgage recast calculator uses this figure
to show how much it can drop post-recast.
For example, let’s say your current monthly payment is $1,500. After entering your lump-sum payment and other details, the calculator may show that your new monthly payment would be $1,200,
reflecting the reduction in the principal amount.
4. Remaining Loan Term: The remaining loan term is the time left until the loan matures. For example, if you originally took out a 30-year mortgage and have already paid it for 10 years, the
remaining loan term is 20 years.
The remaining term is vital for a mortgage recast calculator because the longer you have left on your loan, the more interest savings you can potentially realize from making a lump-sum payment now.
The new monthly payment is recalculated based on the existing term, not extended or shortened, which keeps the loan’s maturity date the same.
5. Lump-Sum Payment: This is the extra amount of money you’re willing to pay upfront toward the principal. In a mortgage recast, this lump-sum payment can come from various sources, such as a tax
refund, bonus, inheritance or savings. For instance, if you decide to pay an extra $20,000 on your mortgage principal, the recast calculator will adjust the monthly payment based on this reduced balance.
6. New Monthly Payment: After entering all the above variables, the recast calculator will provide a new estimated monthly payment. This number reflects how much your mortgage payment will decrease
as a result of the lump-sum payment.
For example, after a $20,000 payment on a $200,000 mortgage, your new monthly payment might drop from $1,500 to $1,350, depending on your interest rate and loan term. This reduced monthly payment can
free up cash flow for other financial goals.
7. Interest Savings: The most significant benefit of a mortgage recast is the potential interest savings. By paying down the principal balance earlier than scheduled, you reduce the amount on which
interest is calculated, which can lead to substantial savings over time.
The mortgage recast calculator will show how much interest you’ll save by comparing the total interest paid over the life of the loan before and after the recast. For instance, recasting could save
you $15,000 to $30,000 in interest, depending on the loan amount, interest rate and time remaining.
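To make the mechanics concrete, here is a minimal sketch of what such a calculator computes, using the standard fixed-rate amortization formula; the dollar figures and function names are illustrative, not taken from any particular lender's tool:

```python
def monthly_payment(principal, annual_rate, months):
    """Standard fixed-rate amortization payment."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def recast(balance, annual_rate, months_left, lump_sum):
    """New monthly payment and lifetime interest saved after a recast.

    The rate and maturity date stay fixed; only the principal drops.
    """
    old = monthly_payment(balance, annual_rate, months_left)
    new = monthly_payment(balance - lump_sum, annual_rate, months_left)
    # Old plan pays old * n in total; new plan pays new * n plus the lump sum.
    interest_saved = old * months_left - (new * months_left + lump_sum)
    return new, interest_saved

# Illustrative: $200,000 left at a fixed 4% with 20 years (240 months) to go.
new_pmt, saved = recast(200_000, 0.04, 240, 20_000)
print(round(new_pmt, 2), round(saved, 2))  # ~1090.77 and ~9087 with these inputs
```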
FYI: An air loan is a deceptive form of mortgage fraud where brokers fabricate both a borrower and a property. By creating false transactions, they mislead lenders into believing they are
financing a legitimate purchase, allowing the fraudsters to illicitly profit.
Benefits of Using a Mortgage Recast Calculator
Using a mortgage recast calculator has several advantages for homeowners looking to make an informed decision:
1. Financial Transparency: The calculator offers a transparent view of how a lump-sum payment affects your mortgage, helping you assess whether recasting is a worthwhile strategy. It gives clear
insights into the new monthly payments, the amount of interest saved and how much your overall financial situation could improve.
2. Easy Decision-Making: Homeowners can use the results to determine if recasting is financially beneficial compared to other strategies like refinancing or investing the lump-sum payment elsewhere.
With accurate figures from the calculator, you can make decisions aligned with your financial goals.
3. Improved Cash Flow: One of the primary reasons homeowners choose to recast is to lower their monthly payments, freeing up cash flow for other expenses, investments or savings. The calculator
helps you understand exactly how much extra money you could have each month after a recast.
4. No Need for Refinancing: Recasting offers a simpler, less expensive alternative to refinancing. Refinancing often comes with closing costs, fees and the hassle of qualifying for a new loan.
Recasting allows you to reduce your payments without changing the terms of your existing loan, making it a convenient option for homeowners who want to avoid the complexities of refinancing.
Disadvantages of Mortgage Recasting
While recasting can be a great tool for some homeowners, it’s not for everyone. Here are some potential drawbacks:
1. Large Lump-Sum Payment Required: A mortgage recast requires a significant upfront payment, typically $5,000 or more. Not all homeowners have this amount readily available and for some, putting
that money into other investments or financial goals may make more sense.
2. No Change in Loan Term or Interest Rate: Unlike refinancing, a recast doesn’t reduce your interest rate or shorten your loan term. If your goal is to pay off your mortgage sooner or take
advantage of lower rates, refinancing may be a better option.
3. Not Available for All Loans: Some loans, particularly government-backed loans like FHA or VA loans, may not offer recasting as an option. Additionally, some lenders may charge a fee for
recasting, typically ranging from $200 to $500, which should be factored into your decision.
4. Tied-up cash: The lump sum is no longer accessible once applied toward the mortgage principal. You can’t easily access these funds unless through a home equity loan or selling the property.
Mortgage Recast Vs Refinancing
Here is a comparison between Recast and Refinancing:
Feature | Mortgage Recast | Refinancing
Purpose | Lower monthly payments by paying down principal | Replace the current loan with one that has better terms
Loan Term | Remains the same | Can be extended or shortened
Interest Rate | Stays the same | Can be lowered (if refinancing to a lower rate)
Costs | Small recast fee ($200-$500) | Closing costs, application fees, etc. (~2-5% of loan)
Credit Check | Not required | Required
Eligibility | Requires lump-sum payment; not available for FHA/VA loans | Credit score, income, and home appraisal are critical for approval
Monthly Payment Reduction | Based on new principal balance | Depends on new loan terms and interest rate
Upfront Lump Sum | Required (minimum $5,000 or more) | Not required
Availability | Limited to lenders that allow recasting | Widely available
Application of Mortgage Recast Calculator
There are several situations where using a mortgage recast calculator can be especially beneficial:
1. Windfall Payments: If you’ve received a large sum of money, such as a tax refund, bonus or inheritance, a recast can help you put that money to work by reducing your monthly payments and overall
interest costs.
2. Downsizing: Homeowners who sell a property and purchase a new home with a smaller mortgage can use a recast to reduce their monthly payments without refinancing.
3. Increased Cash Flow Needs: If your financial situation changes and you need to free up cash flow, a recast can help lower your monthly payments without the hassle of refinancing.
4. After selling an investment property or downsizing your home and having extra cash to pay toward the mortgage.
5. When you’re preparing for retirement and want to reduce financial burdens by lowering monthly housing expenses.
When Should You Recast a Mortgage?
1. You have extra cash: If you come into a large sum of money and would prefer to reduce monthly payments without the hassle of refinancing.
2. You want to lower monthly payments: Recasting lowers your payments while keeping the original loan term.
3. You’re happy with your current interest rate: If the current interest rate is favorable, recasting is ideal as it leaves the rate unchanged.
4. You want a cost-effective option: Recasting is generally cheaper than refinancing since it avoids high closing costs.
Input & Output Breakdown Table
Input Variable | Description
Current Loan Balance | The remaining mortgage principal before the lump-sum payment.
Current Monthly Payment | The amount you currently pay monthly for the mortgage.
Interest Rate | The fixed interest rate on the mortgage loan.
Lump-Sum Payment | The additional amount you intend to pay to reduce the loan’s principal.
Remaining Loan Term | The time left on the mortgage (e.g., 15, 20, or 25 years).
Output Variable | Description
New Monthly Payment | The updated monthly mortgage payment after applying the lump sum.
Interest Savings | The total amount saved in interest payments over the life of the loan post-recast.
Total Payment Savings | Combined savings on interest and principal repayments after recasting.
A mortgage recast calculator is a valuable tool for homeowners considering making a lump-sum payment toward their mortgage. It provides clear insights into how much their monthly payments will
decrease and how much interest they can save over the life of their loan. While recasting is not for everyone, it can be a great option for those with a significant lump-sum payment who want to lower
their monthly payments without the complexities and costs of refinancing. By using a mortgage recast calculator, homeowners can make informed decisions that align with their financial goals.
|
{"url":"https://techbonafide.com/mortgage-recast-calculator/","timestamp":"2024-11-05T15:01:38Z","content_type":"text/html","content_length":"215448","record_id":"<urn:uuid:852abb13-9eb1-4f11-aee2-179de58e0233>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00405.warc.gz"}
|
Convert Attohertz to Wavelength In Kilometres
Please provide values below to convert attohertz [aHz] to wavelength in kilometres, or vice versa.
Attohertz to Wavelength In Kilometres Conversion Table
Attohertz [aHz] Wavelength In Kilometres
0.01 aHz 2.99792458E+25 wavelength in kilometres
0.1 aHz 2.99792458E+24 wavelength in kilometres
1 aHz 2.99792458E+23 wavelength in kilometres
2 aHz 1.49896229E+23 wavelength in kilometres
3 aHz 9.9930819333333E+22 wavelength in kilometres
5 aHz 5.99584916E+22 wavelength in kilometres
10 aHz 2.99792458E+22 wavelength in kilometres
20 aHz 1.49896229E+22 wavelength in kilometres
50 aHz 5.99584916E+21 wavelength in kilometres
100 aHz 2.99792458E+21 wavelength in kilometres
1000 aHz 2.99792458E+20 wavelength in kilometres
How to Convert Attohertz to Wavelength In Kilometres
wavelength in kilometres = 2.99792458E+23 / aHz
Example: convert 15 aHz to wavelength in kilometres:
15 aHz = 2.99792458E+23 / 15 = 1.9986163866667E+22 wavelength in kilometres
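The whole conversion reduces to dividing the speed of light by the frequency; a short sketch (the constant and function name are mine):

```python
C_KM_PER_S = 299_792.458  # speed of light in km/s

def ahz_to_wavelength_km(f_ahz):
    """Wavelength in km for a frequency given in attohertz (1 aHz = 1e-18 Hz)."""
    return C_KM_PER_S / (f_ahz * 1e-18)  # equals 2.99792458E+23 / f_ahz

print(ahz_to_wavelength_km(15))  # ~1.9986e22 wavelength in kilometres
```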
Convert Attohertz to Other Frequency Wavelength Units
|
{"url":"https://www.unitconverters.net/frequency-wavelength/attohertz-to-wavelength-in-kilometres.htm","timestamp":"2024-11-07T19:32:28Z","content_type":"text/html","content_length":"10419","record_id":"<urn:uuid:947dc66c-e7b4-4d4c-814d-922702ba1919>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00707.warc.gz"}
|
Datatähti Open 2019
• Time limit: 1.00 s
• Memory limit: 512 MB
A binary tree has n levels and the tree is perfect, i.e., each node has two children (except for the leaves). The nodes of the tree are numbered so that the root is 1 and the left and right children
of a node k are 2k and 2k+1. In addition, the tree has m forbidden edges that we may not traverse.
We construct a graph based on the tree by mirroring it in the leaves. How many paths are there such that the path starts and ends in the root of the original tree, it contains at least one edge and
does not traverse any edge more than once?
For example, the following graph has 12 such paths:
The first input line has two integers n and m: the height of the tree and the number of forbidden edges.
After this, there are m lines that contain integers a_1, a_2, \dots, a_m. An integer a_i indicates that we may not use the edge between the nodes a_i and \left \lfloor{\frac{a_i}{2}}\right \rfloor.
Each such edge is given only once.
Print one integer: the number of distinct paths modulo 10^9+7.
The image corresponds to the example. The forbidden edges are red.
Subtask 1 (23 points)
Subtask 2 (26 points)
Subtask 3 (51 points)
• 2 \leq n \leq 60
• 0 \leq m \leq 10^5
|
{"url":"https://cses.fi/231/task/D","timestamp":"2024-11-12T09:39:02Z","content_type":"text/html","content_length":"8177","record_id":"<urn:uuid:f9feba7a-ff78-460b-9bdf-5d209ba48c0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00357.warc.gz"}
|
Minimizing message size in stochastic communication patterns: fast self-stabilizing protocols with 3 bits
This paper considers the basic PULL model of communication, in which in each round, each agent extracts information from few randomly chosen agents. We seek to identify the smallest amount of
information revealed in each interaction (message size) that nevertheless allows for efficient and robust computations of fundamental information dissemination tasks. We focus on the Majority Bit
Dissemination problem that considers a population of n agents, with a designated subset of source agents. Each source agent holds an input bit and each agent holds an output bit. The goal is to let
all agents converge their output bits on the most frequent input bit of the sources (the majority bit). Note that the particular case of a single source agent corresponds to the classical problem of
Broadcast (also termed Rumor Spreading). We concentrate on the severe fault-tolerant context of self-stabilization, in which a correct configuration must be reached eventually, despite all agents
starting the execution with arbitrary initial states. In particular, the specification of who is a source and what is its initial input bit may be set by an adversary. We first design a general
compiler which can essentially transform any self-stabilizing algorithm with a certain property (called “the bitwise-independence property”) that uses ℓ-bit messages to one that uses only log ℓ-bit
messages, while paying only a small penalty in the running time. By applying this compiler recursively we then obtain a self-stabilizing Clock Synchronization protocol, in which agents synchronize
their clocks modulo some given integer T, within Õ(log n · log T) rounds w.h.p., and using messages that contain 3 bits only. We then employ the new Clock Synchronization tool to obtain a
self-stabilizing Majority Bit Dissemination protocol which converges in Õ(log n) time, w.h.p., on every initial configuration, provided that the ratio of sources supporting the minority opinion is
bounded away from half. Moreover, this protocol also uses only 3 bits per interaction.
Bibliographical note
Publisher Copyright:
© 2018, Springer-Verlag GmbH Germany, part of Springer Nature.
ASJC Scopus subject areas
• Theoretical Computer Science
• Hardware and Architecture
• Computer Networks and Communications
• Computational Theory and Mathematics
|
{"url":"https://cris.haifa.ac.il/en/publications/minimizing-message-size-in-stochastic-communication-patterns-fast-2","timestamp":"2024-11-14T02:15:12Z","content_type":"text/html","content_length":"58350","record_id":"<urn:uuid:9240cb46-d648-47a2-a872-65fab0f0330b>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00438.warc.gz"}
|
Fixed Asset Turnover Ratio: Definition, Formula & Calculation
It would not make sense to compare the asset turnover ratios for Walmart and AT&T, since they operate in different industries. Comparing the relative asset turnover ratios for AT&T with Verizon may
provide a better estimate of which company is using assets more efficiently in that sector. From the table, Verizon turns over its assets at a faster rate than AT&T.
Since neither revenue nor average fixed assets can be negative, the fixed asset turnover ratio cannot be negative. Also, a high fixed asset turnover does not necessarily mean that a company is profitable. A company may still be
unprofitable with the efficient use of fixed assets due to other reasons, such as competition and high variable costs. Calculate both companies’ fixed assets turnover ratio based on the above
information. Also, compare and determine which company is more efficient in using its fixed assets. Let us see some simple to advanced examples of formula for fixed asset turnover ratio to understand
them better.
The asset turnover ratio uses total assets instead of focusing only on fixed assets. Using total assets reflects management’s decisions on all capital expenditures and other assets. The fixed asset
turnover ratio is most useful in a “heavy industry,” such as automobile manufacturing, where a large capital investment is required in order to do business. In other industries, such as software
development, the fixed asset investment is so meager that the ratio is not of much use. The formula to calculate the total asset turnover ratio is net sales divided by average total assets. The asset
turnover ratio is calculated by dividing the net sales of a company by the average balance of the total assets belonging to the company.
Fixed assets such as property or equipment could be sitting idle or not being utilized to their full capacity. Conversely, if a company has a low asset turnover ratio, it means it is not efficiently
using its assets to create revenue. Once this same process is done for each year, we can move on to the fixed asset turnover, where only PP&E is included rather than all the company’s assets.
The asset turnover ratio, on the other hand, considers total assets, which include both current and non-current assets. Because different industries have different mechanics and dynamics, each has a
different benchmark for a good fixed asset turnover ratio. For example, a cyclical company can have a low fixed asset turnover during its quiet season but a high one in its peak season. Hence, the best way to assess
this metric is to compare it to the industry mean. This article will help you understand what fixed asset turnover is and how to calculate the FAT ratio using the fixed asset turnover ratio formula. A
higher turnover ratio indicates greater efficiency in managing fixed-asset investments.
1. The Asset Turnover Ratio is a financial metric that measures the efficiency at which a company utilizes its asset base to generate sales.
2. An asset turnover ratio equal to one means the net sales of a company for a specific period are equal to the average assets for that period.
3. The asset turnover ratio measures the value of a company’s sales or revenues relative to the value of its assets.
The company's average total assets for the year were $4 billion (($3 billion + $5 billion) / 2). For every dollar in assets, Walmart generated $2.51 in sales, while Target generated $1.98. Target's lower turnover could indicate that the retailer was experiencing sluggish sales or holding obsolete inventory.
You can also check out our debt to asset ratio calculator and total asset turnover calculator to understand more about business efficiency. A company with a higher FAT ratio may be able to generate more sales with the same amount of fixed assets. DuPont analysis, a system first used in the 1920s to evaluate divisional performance across a corporation, calculates a company's return on equity (ROE) and breaks it down into three components, one of which is asset turnover. The asset turnover ratio is expressed as a rational number that may be a whole number or may include a decimal. By dividing the number of days in the year by the asset turnover ratio, an investor can determine how many days it takes for the company to convert all of its assets into revenue.
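For example (continuing with hypothetical figures), a company with an asset turnover ratio of 2.5 takes roughly 365 / 2.5 = 146 days to convert its assets into revenue.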
Accelerated Depreciation
A company's asset turnover ratio in any single year may differ substantially from previous or subsequent years, so investors should review the trend in the ratio over time to determine whether asset usage is improving or deteriorating. The asset turnover ratio can also vary widely from one industry to the next, so comparing the ratios of different sectors, say a retail company with a telecommunications company, would not be productive; comparisons are only meaningful when they are made between companies within the same sector. The same caveat applies to the fixed asset turnover, since it is simply the ratio of revenue to average fixed assets.
Low vs. High Asset Turnover Ratios
It can be used to compare how a company is performing against its competitors, the rest of the industry, or its own past performance. An asset turnover ratio equal to one means the net sales of a company for a specific period equal the average assets for that period: the company generates $1 of sales for every dollar it carries in assets. Net sales are the gross sales from a specific period less returns, allowances, and discounts taken by customers. When comparing the asset turnover ratio between companies, ensure the net sales figures are pulled from the same period. A company gains the most insight when the ratio is compared over time to see trends.
Thus, to calculate the asset turnover ratio, divide net sales or revenue by the average total assets. One variation on this metric considers only a company's fixed assets (the FAT ratio) instead of total assets. It helps assess how well the company's long-term investments bring adequate returns for the business: the fixed asset turnover ratio measures a company's efficiency as a return on its investment in fixed assets such as property, plant, and equipment.
Weekly practice for week five of the second semester of the 2021 school year (review topic: dynamic programming)
A - keep passing the ball:
In PE class, Xiaoman's teacher often plays games with the students. This time the teacher had the students play a passing game. The rules are as follows: n students stand in a circle, and one of them holds a ball. When the teacher blows the whistle, the passing starts; each student may pass the ball to the student on their left or right. When the teacher blows the whistle again, the passing stops, and the student left holding the ball loses. Clever Xiaoman asks an interesting question: how many different passing methods bring the ball, starting from Xiaoman, back to Xiaoman's hands after exactly m passes? Two passing methods are considered different if and only if the ordered sequences of students receiving the ball differ. For example, with three students No. 1, No. 2 and No. 3, and Xiaoman as No. 1, there are two ways for the ball to return to Xiaoman after three passes: 1 -> 2 -> 3 -> 1 and 1 -> 3 -> 2 -> 1.
The input file ball.in contains one line with two integers n and m separated by a space (3 ≤ n ≤ 30, 1 ≤ m ≤ 30).
The output file ball.out contains one line with a single integer: the number of passing methods that satisfy the problem.
40% of the data satisfy 3 ≤ n ≤ 30, 1 ≤ m ≤ 20; 100% of the data satisfy 3 ≤ n ≤ 30, 1 ≤ m ≤ 30.
Sample Input
Sample Output
Problem analysis:
My first thought was brute-force search by simulation. It is the most direct idea, but it has a great chance of timing out, and when I tried it, it did. So we need a different approach: dynamic programming. Let dp[i][j] denote the number of ways the ball can be at position j after i passes. As usual, we first work out the recurrence: for the ball to reach position j on pass i, it must have been at one of j's two neighbours after pass i-1, so the recurrence is dp[i][j] = dp[i-1][j-1] + dp[i-1][j+1], with the positions j-1 and j+1 taken cyclically around the circle of n students.
#include <bits/stdc++.h>
using namespace std;
int n, m;
int dp[32][32];   // dp[i][j]: ways for the ball to be at student j after i passes
int main(){
    cin >> n >> m;
    dp[0][1] = 1;                 // the ball starts in Xiaoman's (student 1's) hands
    for(int i = 1; i <= m; i++){
        for(int j = 1; j <= n; j++){
            int left  = (j == 1) ? n : j - 1;   // neighbours wrap around the circle
            int right = (j == n) ? 1 : j + 1;
            dp[i][j] = dp[i-1][left] + dp[i-1][right];
        }
    }
    cout << dp[m][1] << endl;     // ways the ball is back with Xiaoman after m passes
    return 0;
}
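As a quick check against the statement: with n = 3 and m = 3 the program prints 2, matching the two sequences 1 -> 2 -> 3 -> 1 and 1 -> 3 -> 2 -> 1.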
B - Queen glori:
The Sultan of Nubia had no children, so he had to choose one of the qualified successors to the throne. He wanted the successor to be smart enough, so he prepared a chessboard with a number from 1 to 99 in each square, along with eight queen pieces. The eight-queens rule is that no two pieces may share a row, column, or diagonal. While satisfying this rule, the heir to the throne must also make the sum of the numbers on the squares occupied by the eight queens as large as possible.
Input format
Enter a number k (k ≤ 20) representing the number of chessboards. Then follow k chessboards, each consisting of 64 numbers arranged in 8 rows and 8 columns; see the example for details. Each number is less than 100.
Output format
Each chessboard outputs the maximum value, a total of k lines.
Sample Input
Sample Output
Problem analysis:
In fact, this is a variant of the eight queens problem. My approach is somewhat brute-force: enumerate every valid arrangement with a depth-first search and maintain a running maximum. However, I made a small mistake while writing it: I forgot to reset MAX between boards, which produced a wrong answer.
#include <bits/stdc++.h>
using namespace std;
int fs[100], bs[100], c[100], r[100], a[10][10];
int MAX;
int k;
void out(){                       // score the current placement
    int ans = 0;
    for(int i = 1; i <= 8; i++)
        ans += a[i][r[i]];        // the queen in row i stands on column r[i]
    MAX = max(MAX, ans);
}
void dfs(int x){                  // place a queen in row x
    if(x > 8){ out(); return; }
    for(int i = 1; i <= 8; i++){
        if(!c[i] && !fs[x + i] && !bs[x - i + 8]){
            r[x] = i;
            c[i] = 1, fs[x + i] = 1, bs[x - i + 8] = 1;
            dfs(x + 1);
            c[i] = 0, fs[x + i] = 0, bs[x - i + 8] = 0;
        }
    }
}
int main(){
    cin >> k;
    while(k--){
        for(int i = 1; i <= 8; i++)
            for(int j = 1; j <= 8; j++)
                cin >> a[i][j];
        MAX = 0;                  // reset between boards: forgetting this was my bug
        dfs(1);
        cout << MAX << endl;
    }
    return 0;
}
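A note on the cost: each board triggers a full eight-queens search, which only ever scores the 92 complete placements, so even the maximum of k = 20 boards finishes essentially instantly.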
Topic background:
trs likes skiing. He comes to a ski resort, which is a rectangle; for simplicity, we represent the terrain as a matrix with r rows and c columns. To gain speed, the sliding route must always go downhill: from a point you may slide to one of the four adjacent points (up, down, left, right) whose height is strictly lower. In the sample, 24-17-16-1 is a valid route, but 25-24-23-...-3-2-1 is longer; in fact, it is the longest.
Input format:
Line 1: two numbers r and c (1 ≤ r, c ≤ 100), the numbers of rows and columns of the matrix. Lines 2 to r+1: c numbers per line, giving the matrix.
Output format:
Only one line: an integer, the maximum slide length.
Sample Input
Sample Output
Problem analysis:
The key point of this problem is how to realize the access from the lowest point. First, put the data in a one-dimensional array. At the same time, each element of this array should be able to deduce
its position in the two-dimensional array, and then sort the one-dimensional array. I use a structure array that contains multiple sets of data.
#include <bits/stdc++.h>
using namespace std;
int a[105][105];          // heights
int l[105][105];          // l[x][y]: longest downhill route ending at (x, y)
struct node{
    int x, y;
    int data;             // height, kept so the cells can be sorted
};
node b[105 * 105];
bool cmp(node u, node v){ return u.data < v.data; }
int n, m;
int main(){
    cin >> n >> m;
    for(int i = 1; i <= n; i++)
        for(int j = 1; j <= m; j++){
            cin >> a[i][j];
            b[(i - 1) * m + j - 1] = {i, j, a[i][j]};
            l[i][j] = 1;
        }
    sort(b, b + n * m, cmp);                   // visit cells from lowest to highest
    int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    int ans = 1;
    for(int i = 0; i < n * m; i++){
        int x = b[i].x, y = b[i].y;
        for(int d = 0; d < 4; d++){
            int nx = x + dx[d], ny = y + dy[d];
            if(nx >= 1 && nx <= n && ny >= 1 && ny <= m && a[nx][ny] < a[x][y])
                l[x][y] = max(l[x][y], l[nx][ny] + 1);   // extend a route from a lower neighbour
        }
        ans = max(ans, l[x][y]);
    }
    cout << ans << endl;
    return 0;
}
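Because every strictly lower neighbour of a cell is processed before the cell itself, each l[x][y] is final the moment it is computed; the sort dominates the running time at O(rc log(rc)).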
F - all mine:
There is a box with capacity V (a positive integer, 0 ≤ V ≤ 20000) and n items (0 ≤ n ≤ 30), each with a positive integer volume. Put any subset of the n items into the box so as to minimize the box's remaining space.
Input format:
The first line, an integer, represents the box capacity;
The second line, an integer, indicates that there are n items;
The next n lines give the respective volumes of the n items.
Output format:
An integer representing the remaining space of the box.
Sample Input
Sample Output
Problem analysis:
Because the data is small, my first idea was again brute-force search. Plain brute force would not pass, so I later added some simple pruning, which worked better than I expected. The other approach to this problem is dynamic programming, which is better suited to larger inputs.
// Brute-force search with simple pruning
#include <bits/stdc++.h>
using namespace std;
int a[35];
int V;     // box capacity
int N;     // number of items
int MIN;   // smallest remaining space found so far
bool f(int u, int v){
    return u > v;          // sort volumes in descending order: try big items first
}
void dfs(int v, int i){    // v: space left, i: first item not yet considered
    MIN = min(MIN, v);
    if(MIN == 0) return;   // a full box cannot be improved on
    for(int j = i; j < N; j++)
        if(a[j] <= v)
            dfs(v - a[j], j + 1);
}
int main(){
    cin >> V >> N;
    for(int i = 0; i < N; i++) cin >> a[i];
    sort(a, a + N, f);
    MIN = V;
    dfs(V, 0);
    cout << MIN << endl;
    return 0;
}
// Dynamic programming
#include <bits/stdc++.h>
using namespace std;
int V, N;
int a[35];
int dp[20005];   // dp[j]: largest volume that fits into capacity j
int main(){
    cin >> V >> N;
    for(int i = 0; i < N; i++) cin >> a[i];
    for(int i = 0; i < N; i++)
        for(int j = V; j >= a[i]; j--)           // go downwards so each item is used at most once
            dp[j] = max(dp[j], dp[j - a[i]] + a[i]);
    cout << V - dp[V] << endl;   // remaining space
    return 0;
}
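This is the classic 0/1 knapsack with each item's value equal to its volume, so it runs in O(NV) time, which is comfortably fast for V ≤ 20000 and N ≤ 30.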
G - I have a pair of scissors:
There are n line segments on the number line; segment i has left endpoint ai and right endpoint bi. You need to delete some segments so that the remaining segments have no common part except possibly shared endpoints. Compute the maximum number of segments that can be kept.
Input format
The first line is an integer n (1 ≤ n ≤ 10^6), the number of line segments. Each of the next n lines contains two integers ai and bi (0 ≤ ai < bi ≤ 10^6).
Output format
An integer: the maximum number of segments that can be kept.
Sample Input
Sample Output
Problem analysis:
This question mainly tests thinking. The original problem uses contest times as its setting, which I find easier to understand: there are several contests, each with its own start and end time, and our goal is to attend as many contests as possible. Sort the contests by end time, then scan through them in order, always taking the next compatible one. Sorting by start time and then enumerating does not work: a single contest that runs too long can crowd out several shorter ones, which is not what we want.
#include <bits/stdc++.h>
using namespace std;
struct node{
    int l, r;
};
node a[1000005];
bool cmp(node x, node y){
    return x.r < y.r;        // sort by right endpoint
}
int main(){
    int n;
    cin >> n;
    for(int i = 0; i < n; i++) cin >> a[i].l >> a[i].r;
    sort(a, a + n, cmp);
    int ans = 1, cnt = a[0].r;        // cnt: right endpoint of the last kept segment
    for(int i = 1; i < n; i++){
        if(a[i].l >= cnt){            // sharing an endpoint is allowed
            ans++;
            cnt = a[i].r;
        }
    }
    cout << ans << endl;
    return 0;
}
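Sorting dominates at O(n log n). The greedy choice is safe by the usual exchange argument: among the remaining compatible segments, the one that ends earliest leaves the most room for everything after it.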
Calculus analogies
Consider these analogies for helping to understand key concepts in calculus.
In mathematics it is often useful to have different ways of viewing concepts in order to help build up intuition; however, we need to know to what extent analogies are mathematically trustworthy. Consider the following analogies concerning calculus: some might be good analogies, some might sometimes work, and some might simply not work at all.
Consider and test these proposed analogies for understanding aspects of calculus carefully, applying them to several examples. Which analogies are largely sound, and which fail to work? Provide
examples of functions to exemplify your points.
Analogy: A curve is a road on a map
Imagine an analogy where the curve of a function represents a road drawn on a map. Imagine driving along this road, starting from the left (west) of the map and heading east.
1. Sign of the derivative of a function at each point
The derivative of the function is positive when travelling towards the north, negative when travelling towards the south.
2. Sign of the second derivative of a function at each point
If your steering wheel is turned clockwise from the neutral position then the second derivative is negative. If it is turned anticlockwise from the neutral position then the second derivative at that
point is positive.
3. Sign of the third derivative of a function at each point
If the steering wheel is in the process of turning in the anti clockwise direction then the third derivative is positive. If the steering wheel is in the process of turning in the clockwise direction
then the third derivative is negative.
4. Differentiability condition at each point
The function is differentiable at points on the road where it is possible to drive along smoothly without having to suddenly turn the steering wheel.
5. Points of inflection
Points of inflection occur at the points, and only the points, where the steering wheel passes through the neutral position.
Note on terminology
: The 'neutral position' is the position of the steering wheel in which the car travels forwards in a straight line. A clockwise turn from this position causes the car to turn right and an
anticlockwise turn from this position causes the car to turn left.
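As a quick sanity check (our own worked example, not part of the original problem), take the road y = x^3 driven from west to east. Then y' = 3x^2 ≥ 0, so the car never heads south; y'' = 6x is negative for x < 0 (steering wheel turned clockwise) and positive for x > 0 (turned anticlockwise); and at x = 0 the wheel passes through the neutral position, which is exactly the curve's point of inflection.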
Getting Started
It can be useful to hold your hands as if holding a wheel and imagine driving along a road!
Don't forget to focus your attention on the main concepts in the problem and don't get distracted by questions such as what if the road is not flat etc. This is a mathematical idealisation!
Teachers' Resources
Why do this problem?
This fun problem will hopefully prove incredibly useful to all students: having a sound geometrical visualisation for concepts in calculus is essential in any application beyond the simplest algebraic examples, and it also proves very useful for checking that calculations make sense. It will also be very useful for uncovering misconceptions about calculus.
Possible approach
You might want to hand out these cards in Word 2003 or pdf format so that students can more easily consider the statements under discussion.
This need not be a long activity and can be used at any point in the curriculum where the concepts in any of the 5 analogies have been encountered. You can focus on a couple of the most relevant
analogies if desired.
You could simply set up the situation and let the students enter into discussion. Students can think about the ideas in small groups and sketch 'road maps' on which to test their ideas.
Alternatively, you can sketch a curve with, say, 4 turning points on the board and ask for a volunteer to model the motion of the imaginary steering wheel as you trace your finger along the curve.
Another volunteer can record the motion of the steering wheel, paying particular attention to the direction or speed of turn. You could then sketch a more 'demanding' road and repeat the exercise.
There are at least three levels of approach to this problem:
1) Once students are intuitively clear as to which analogies are largely reliable the lesson can move on and the analogies can be referred to as a guide throughout subsequent study of calculus.
2) Students can try to construct convincing justification that the analogies are sound, including some thought on when the analogies break down (i.e. what sorts of roads do the analogies work for,
and what sorts of 'pathological' roads do the examples not work for?)
3) Students might try to come up with some analogies of their own which others might test out. For example, other analogies for the sign of the gradient might involve mountains, valleys or hills.
Note that various misconceptions might be unearthed during this task, and many more advanced concepts in mathematics might be raised by students. See the possible support below for some of these.
Key questions
Who can drive a car? Who can describe the motion of a wheel through a journey?
Can you imagine driving along the road indicated on this map?
For what sorts of crazy curves might these analogies not work?
Can you give a clear justification for your results (using words or algebra)?
What can we say about a car which is moving due north at some point?
Possible extension
The key advanced extension is to try to create analogies for other concepts in calculus. This is very open ended, but will really get students thinking about calculus as the mathematics of rates of change.
Possible support
Some students, who equate mathematics with algebra, might struggle to see this as 'mathematics'. Reassure them that the visualisation practiced and the explanations constructed are a key part of
advanced mathematical thinking.
Some students, even the most traditionally 'able', might find the visualisation aspect of this problem extremely difficult. Such students need to be encouraged not simply to give up and to exercise
this part of their mathematical brain. Perhaps others in the group might try to explain the concepts to them?
Misconceptions or errors to look out for are:
1. The steeper the gradient the more the wheel needs to be turned
2. A function can be used to describe, say, a circle (No: A function is single valued)
3. A point of inflection must also be a stationary point (No: that would be a stationary point of inflection; see the counterexample below)
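For misconception 3, a concrete counterexample (our own illustration, not from the original page) is y = x^3 + x: its second derivative 6x changes sign at the origin, so there is a point of inflection there, yet y'(0) = 1 is non-zero, so the origin is not a stationary point.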
Advanced concepts in mathematics which might be raised in some form are:
1. What is a function as opposed to a curve?
2. What is a continuous / differentiable function?
3. What is curvature?
4. Are there functions which are only twice differentiable?