content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Subdivision surface techniques
Mohamad, lklima and Bade, Abdullah (2007) Subdivision surface techniques. In: Real-time computer graphics theory and application Vol I. Penerbit UTM , Johor, 95-110 . ISBN 978-983-52-0614-6
Full text not available from this repository.
In geometric modeling, we always want our geometric model to looks as realistic as the real world. In this respect, the smoothness characteristics always come first. Thus, what kind of technique
allows us to produce smooth modeling? The best approach is subdivision surface technique. In general, subdivision surface is a way to represent polygonal model is smoothly. In order to produce smooth
surfaces, our main interest is the curvature at the surface. Implementation of subdivision technique to 3D model is quite straightforward and fast (DeRose.T et al 1998).
Repository Staff Only: item control page | {"url":"http://eprints.utm.my/14080/","timestamp":"2024-11-14T21:03:49Z","content_type":"application/xhtml+xml","content_length":"17906","record_id":"<urn:uuid:da0f2c8c-5391-4fc2-af46-641227a6b02d>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00205.warc.gz"} |
Cantor space
Worth adding the coalgebraic description? What is it, the terminal coalgebra for the endofunctor on $Top$, $X \mapsto X + X$?
I’m interested in the last sentence of the Idea section. In what way is Cantor space used to construct the Peano curve?
I gave Cantor space more of an Idea-section. Then I expanded the discussion at As a subspace of the real line with full detail. The same discussion I also copied over to Tychonoff product in this
I added a bit more to Cantor space, including the abstract characterization up to homeomorphism (which was oddly missing, since what was there seemed to be leading right up to that point). While I
was at it, I created perfect space (with perfect set redirected to it).
I cross-linked Cantor space from (newly created) Examples-sections at topological space and locale and topology - contents
Eventually hopefully this sidebar for topology is expanded to something that reflects the scope of the relevant nLab articles
I created Cantor space to record its definition as a locale, but goodness knows there is no end to what might be written about it.
Todd, sorry, I should have been more explicit. Maybe I should write “may be used to neatly organize the construction” rather than “may be used to construct”: I am thinking of picking a continuous
surjection $Cantor \overset{t}{\to} Cantor \times Cantor$ (e.g. unshuffle), then observing that there is easily a continuous surjection $s \colon Cantor \to [0,1]$ and, with a tad more work, that
every continuous function from $Cantor$ to a linear space may be extended along the defining embedding $Cantor \hookrightarrow [0,1]$ (by linear interpolation). Then applying this extension to the
surjection $Cantor \to Cantor \overset{t}{\times} Cantor \overset{(s,s)}{\to} [0,1] \times [0,1]$ gives the desired continuous surjection.
Well, I’ll be. That’s rather nice, Urs. Never saw that before (and see nothing wrong with it).
David: that’s right.
Ok, I’ll put it in.
Being much taken with the simplicity of this Peano curve as sketched by Urs #7, I looked around and saw this is called the Lebesgue space-filling curve, which has another nice properties such as
being differentiable almost everywhere. It’s obviously similar in flavor to the Cantor-Lebesgue function.
Anyway, I went ahead and bashed out the construction at Peano curve, with a proof of continuity. It was just a quick and dirty job, which I may see about cleaning up later. Not many cross-links were
inserted. (It’s now bedtime for me.)
Thanks, Todd! That’s very nice. I didn’t know that Lebesgue’s name is associated with this.
I did some more jiggering with Peano curve, which then led me to add to Cantor space a proof of the Hausdorff-Alexandroff theorem, which says that every compact metric space is a continuous image of
Cantor space. | {"url":"https://nforum.ncatlab.org/discussion/1337/cantor-space/?Focus=62556","timestamp":"2024-11-11T08:30:28Z","content_type":"application/xhtml+xml","content_length":"55040","record_id":"<urn:uuid:d83ebfb1-c923-4d2c-942e-b6d04413f3b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00564.warc.gz"} |
Development of Roller Ends Forced-Contact Model and Cambering Technology for UCM Temper Mill (II)——Development of cambering technology for UCM temper mill
Development of Roller Ends Forced-Contact Model and Cambering Technology for UCM Temper Mill (II)——Development of cambering technology for UCM temper mill ()
1. Development of Cambering Technology for UCM Temper Mill
1.1. Target of Cambering
The intermediate roll should have proper shift for improving strip shape in the process of temper rolling. But the shift of intermediate roll will make the pressure distribution between rollers more
uneven. Peak distribution will appear and cause uneven abrasion of rollers. Strip shape and surface quality will become worse. Roll consumption will increase sharply. In addition, the contact of
working rolls outside the plate width will appear when thin and narrow strip is temper rolled. The forced-contact of working rolls will lead to that partial presetting rolling force is used to metal
deformation and others cause the roller ends squashed which lead to the actual elongation is smaller than design value. The product performance does not meet the user needs. Working roll consumption
will increase accordingly. Due to the requirements of customers for strip mechanical performance, strip shape and surface quality become higher and higher and the cost for rollers compressed
constantly as well, the problems above are bottleneck problems and the focus of technology research in the temper rolling process of thin and narrow strip. Therefore, the newly-built 1220 UCM temper
mill of Baosteel is taken as a research object, a model of roller ends forced-contact and a computational model of flatness which is suited for UCM temper mill have been established after lots of
field tracing and theoretical research. The following three targets are realized through the optimizing design of roll configuration: 1) good flatness of rolled strip; 2) without forced-contact of
working rolls; 3) The pressure distribution between rollers caused by the shift of middle roll become more homogeneous. Side effects such as peak are disappeared. Working life of roll is improved
effectively [1].
1.2. Development of Cambering Model for UCM Temper Mill
From the metal deformation model, the strip shape of the first and second stand in exit can be expressed with the following formulas for two-stand UCM temper mill:
Similarly, from the model for elastic deformation of rolls mentioned in 2.2, the transverse distribution value of strip exit thickness in the first and second stand
During the optimizing process of cambering, if the actual datas of
It should be noted that the pressure between intermediate roll and working roll and the pressure between intermediate roll and back-up roll in the first stand have nothing with the working roll shape
and intermediate roll shape of the second stand. For the convenience of expression, the form of Equations (37) and (38) are similar to Equations (39) and (40)
Actually, according to the production characteristics of two-stand UCM temper mill, the working roll configuration curves and back-up roll configuration curves of two stands are often designed to the
same curve. And then, Equations (40) to (45) as follows:
Based on field experiments and theoretical research, shown as Figure 3, the form of working roll configuration curve is shown in the following Equation (52) [3]:
In Equation (52), the cosine subentry is used to control edge wave, the high order curve subentry is used to control roll forced-contact. Obviously, it is easy to describe working roll configuration
curve shown in Figure 3 and Equation (52) if
Allow for the actual working ability of grinder, in practice, the intermediate roll configuration curve can be expressed with the following Equation (53) (as shown in Figure 4):
So, relevant roll profile parameters can be expressed with
Figure 3. The Schematic diagram of working roll curve.
Figure 4. The Schematic diagram of intermediate roll curve.
In this way, on condition that the parameters of mental deformation model and roller system model such as rolling force and tension etc. are known, the optimizing objective function of working roll
profile and intermediate roll profile can be expressed with the following formulas for a single specification product and the equipment and production craft characters of UCM temper mill [4]:
Obviously, the optimal setting of working roll configuration curve and intermediate roll configuration curve is aimed at improving strip shape quality and managing roll forced-contact at the same
time, making the pressure distribution between rollers become more homogeneous, reducing the peak value of pressure between rolls, improving working life of rollers. It should be noted that
production site, in practical, often m kinds of specifications are choosed for optimization, and according to the portion of total amount to weight, the more often produced the product is, the
weighting coefficient is more larger. So, the objective control function for the optimization of roll contour can be expressed as follows [6]:
So, the comprehensive optimization of working roll configuration curve and intermediate roll configuration curve translate into seeking a suitable roll configuration curve parameters
1.3. Application of Cambering Model in UCM Temper Mill
Working roll, intermediate roll and back-up roll are all flat roll at the beginning of putting into production of the newly-built 1220 UCM temper mill of Baosteel. The phenomenons which Elongation
does not reach the standard and strip shape is unqualified were often found in production. In order to solve the shape and mechanical property problems, relevant theories introduced in 3.2 are
applied at the base of Section 2 and the working roll and back-up roll configuration curves are optimized. Moreover, the new roll shape has been put into field operation in 2010, a significant effect
have been received. Not only the strip shape has guaranteed, but also the mechanical property has the users’ request. At present, this roll shape has been adopted regularly by field as process
planning. Relevant circumstances are introduced in details as follows:
1.3.1. Cambering Scheme of 1220 Two-Stand UCM Temper Mill
Based on calculations, working roll adopts the roll configuration curve illustrated in Equation (63), intermediate roll adopts the roll configuration curve illustrated in Equation (64), back-up roll
adopts flat roll [7].
1.3.2. Introduction for Test Results of Field Comparison
To further analyses the roll shape reconstruction effect of the newly-built 1220 two-stand UCM temper mill of Baosteel, as a contrast, take the second stand as the example, working roll and
intermediate roll adopt the roll configuration curve illustrated in Equations (63) and (64) respectively, back-up roll adopts flat roll. 0.15 × 718 mm was choosed as specimen to do the forced-contact
test (related equipments and process parameters as shown in the following Table 1). The value of contact width, actual rolling force and strip shape before and after roll crown optimization were
given though test. After the forced-contact test, the production of this specification proceeded with the elongation of 1.0% until roll changing. The length of rolling this moment was recorded in the
following Table 3. The value of the indicator function of pressure’s degree of uniformity and the indicator function of peak pressure are computed as well, shown in the following Table 4 [8].
It can be seen from Table 3 that the roll forced-contact has been controlled effectively after roll shape was optimized. Slight roll forced-contact just appeared at the elongation of 1.2% (this
moment, the product mechanical property has meet the users’ demand). When elongation increased to 0.4%, the roller ends forced-contact just appeared if original roll shape was adopted, and with the
increase of elongation, the roller ends forced-contact become worse and worse, normal rolling was affected at last. In addition, it can be seen clearly from Table 3 that the shape quality of strip
improved greatly, measured maximum strip shape decreased apparently, the length of rolling increased greatly. Last, it can be seen from Table 3 that both the value of the indicator function of
pressure’s degree of uniformity and the value of the indicator function of peak pressure decreased greatly. Combining Tables 3 and 4, it can be seen through field contrast tests that expected targets
such as the control of roll forced-contact, the improvement of shape quality and the reducing of roll consumption all achieved.
2. Conclusions
Roller ends forced-contact and overmuch roll consumption exist in the rolling process of two-stand UCM temper mill. Fully thinking the equipment and production craft characters of UCM temper mill, a
model of roller
Table 3. The value of contact width, actual rolling force and strip shape before and after roll crown optimization.
Table 4. Contrast of the indicator function of pressure’s degree of uniformity and the indicator function of peak pressure before and after roll crown optimization.
ends forced-contact and a computational model of flatness which is suited for UCM temper mill is established after lots of field tracing and theoretical research. Moreover, on the basis of this,
starting with the roll crown optimization of working rolls and intermediate rolls, strip shape, roll consumption and the management of roll forced-contact are considered as well. A mathematical model
of roll crown optimization which is suited for the working rolls and intermediate rolls of UCM temper mill is developed. Relevant technologies have been used to the practice of 1220 UCM temper mill
of Baosteel. The pressure distribution between rollers caused by the shift of middle roll becomes more homogeneous. Side effects such as peak are disappeared. Working life of roll is improved
effectively. This technology has achieved good use effects, which is of further extending application value.
3. Acknowledgements
Contract/grant sponsor and Grant Number:
1) Contract/grant sponsor: Hundred Excellent Researchers Award Program of Department of Education of Hebei Province. Grant Number: CPRC018.
2) Contract/grant sponsor: Natural Science Foundation of Hebei Province (Surface Project). Grant Number: E2011203019.
3) Contract/grant sponsor: Natural Science Foundation of Hebei Province (Base Special Fund). Grant Number: 08B015. | {"url":"https://www.scirp.org/journal/paperinformation?paperid=6255","timestamp":"2024-11-13T11:08:33Z","content_type":"application/xhtml+xml","content_length":"119594","record_id":"<urn:uuid:d87e24d4-72db-403b-9894-16685c1a5141>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00668.warc.gz"} |
NxN Cube Solver: A Simple and Effective Method for Solving L
NxN Cube Solver: A Simple and Effective Method for Solving Large Cubes
How to Solve NxN Cubes: A Guide for Beginners and Pros
If you are fascinated by the Rubik's cube and its many variations, you might have wondered how to solve them. A nxn cube solver is a tool or a program that can help you find the solution for any size
of cube, from 2x2x2 to 10x10x10 and beyond. In this article, we will explore what a nxn cube solver is, how it works, and what are some benefits and applications of solving cubes.
nxn cube solver
What is a NxN Cube Solver?
A nxn cube solver is a general term for any device or software that can solve a cube puzzle with n squares on each edge. For example, a 3x3x3 cube solver can solve the classic Rubik's cube, while a
4x4x4 cube solver can solve the Rubik's Revenge or Master Cube. A nxn cube solver can either be physical, such as a robot arm that manipulates the cube, or virtual, such as a website or an app that
simulates the cube and shows you the steps to solve it.
A nxn cube solver can be useful for several reasons. First, it can help you learn how to solve the cube yourself by following the instructions or watching the animations. Second, it can help you
improve your speed and efficiency by showing you optimal algorithms and moves. Third, it can help you explore different types of cubes and their properties by allowing you to change the size, shape,
and colors of the puzzle.
How did the Rubik's Cube and its Variations Come into Existence?
The Rubik's Cube was invented in 1974 by Erno Rubik, a Hungarian architect and professor. He wanted to create a three-dimensional model that could demonstrate spatial movement and geometry to his
students. He designed a cube with six faces, each divided into nine smaller squares of six different colors. The cube had an internal mechanism that allowed each face to rotate independently, thus
mixing up the colors. The challenge was to restore the cube to its original state, with each face having one color.
The Rubik's Cube was initially called the Magic Cube and was sold in Hungary in 1977. In 1980, it was licensed by Ideal Toy Corp and renamed as the Rubik's Cube. It became an international sensation
and one of the best-selling toys of all time. It also inspired many variations, such as larger cubes (4x4x4, 5x5x5, etc.), different shapes (pyramid, dodecahedron, etc.), and different mechanisms
(twisty puzzles, sliding puzzles, etc.).
Online NxN Rubik's Cube Simulator and Solver
How to solve any size Rubik's Cube with Cube-Solver.com
NxNxN cube solver and simulator tutorial
Best online tools for solving NxNxN Rubik's Cubes
Learn the layer-by-layer method for solving NxNxN cubes
Cube-Solver.com review: pros and cons of the online NxN cube solver
How to input a scramble on Cube-Solver.com
How to use the color picker on Cube-Solver.com
How to rotate and zoom the virtual cube on Cube-Solver.com
How to apply a sequence of moves on Cube-Solver.com
How to reset the virtual cube to the solved state on Cube-Solver.com
How to change the cube size on Cube-Solver.com
How to simulate a 20x20x20 cube on Cube-Solver.com
Ruwix.com: another online NxNxN cube simulator and solver
Compare Cube-Solver.com and Ruwix.com for solving NxNxN cubes
How to solve the Rubik's Cube faster with the CFOP method
How to group the centers and edges on larger cubes
How to solve a 4x4x4 cube like a 3x3x3 cube
Rubiks-cube-NxNxN-solver: a generic Rubik's Cube solver on GitHub
How to install and use Rubiks-cube-NxNxN-solver
How Rubiks-cube-NxNxN-solver reduces the move counts for solving NxNxN cubes
How to test Rubiks-cube-NxNxN-solver with different cube sizes
How to contribute to Rubiks-cube-NxNxN-solver project
What are the advantages and disadvantages of Rubiks-cube-NxNxN-solver
How to report issues and bugs on Rubiks-cube-NxNxN-solver
What is nuclear fusion and how does it work?
How to create a mini Sun with nuclear fusion experiments
South Korea's KSTAR: a breakthrough in nuclear fusion research
How KSTAR achieved 100 millionC for 30 seconds with nuclear fusion
What are the challenges and benefits of nuclear fusion energy
How KSTAR achieved a net energy gain with nuclear fusion
Why is nuclear fusion called the 'holy grail' of energy sources
How KSTAR compares to other nuclear fusion projects around the world
What are the future plans and goals of KSTAR and nuclear fusion research
How to get involved in nuclear fusion research and development
What are some Methods and Techniques to Solve Different Types of Cubes?
There are many approaches on how to solve different types of cubes. Some of them are based on mathematical principles, such as group theory and permutation cycles. Some of them are based on intuitive
strategies, such as pattern recognition and trial-and-error. Some of them are based on memorizing sequences of moves, called algorithms.
One of the most common methods to solve the 3x3x3 Rubik's Cube is called the layer-by-layer method. It involves solving the cube in three stages: first, making a white cross on one face; second,
completing the first layer by inserting the white corners; third, solving the middle layer by pairing up the edge pieces; fourth, orienting the last layer by making a yellow cross; fifth, permuting
the last layer by placing the corners and edges in their correct positions.
To solve larger cubes, such as 4x4x4 or 5x5x5, one of the main methods is called reduction or redux. It involves grouping the center pieces and edge pieces together to form pseudo-faces that resemble
a 3x3x3. Then, the solver can apply the layer-by-layer method or any other 3x3x3 method to solve the reduced cube. This method can be generalized to any nxn cube, as long as n is even.
To solve odd-sized cubes, such as 7x7x7 or 9x9x9, one of the main methods is called parity or edge pairing. It involves solving the centers and edges first, but sometimes there are cases where two
edges or two centers need to be swapped, which is impossible on a 3x3x3. These cases are called parity errors, and they require special algorithms to fix them. Parity errors can also occur on
even-sized cubes, but they are less common.
What are some Benefits and Uses of Solving Cubes for Fun and Education?
Solving cubes is not only a fun and challenging hobby, but also a great way to develop various skills and abilities. Some of the benefits and uses of solving cubes are:
• It improves your spatial awareness and visualization skills, as you have to imagine how the cube moves and changes in three dimensions.
• It enhances your memory and concentration, as you have to remember algorithms and sequences of moves, and focus on the cube without distractions.
• It boosts your logical thinking and problem-solving skills, as you have to analyze the cube state and find the best solution.
• It stimulates your creativity and curiosity, as you can explore different types of cubes and invent your own methods and algorithms.
• It teaches you patience and perseverance, as you have to practice and improve your speed and accuracy.
• It fosters your social skills and communication, as you can join a community of cubers and share your tips and tricks.
In conclusion, a nxn cube solver is a tool that can help you solve any size of cube puzzle, from the classic 3x3x3 Rubik's Cube to the gigantic 17x17x17 Over The Top Cube. A nxn cube solver can be
physical or virtual, and it can teach you various methods and techniques to solve different types of cubes. Solving cubes is a fun and rewarding activity that can benefit your brain and personality
in many ways. If you are interested in learning how to solve cubes, you can start with a 3x3x3 cube solver online or download an app on your phone. You can also buy a real cube and follow a tutorial
on YouTube or a book. The most important thing is to enjoy the process and have fun!
What is the world record for solving a 3x3x3 Rubik's Cube?
The current world record for solving a 3x3x3 Rubik's Cube is 3.47 seconds, set by Yusheng Du from China in 2018.
What is the largest cube ever solved?
The largest cube ever solved is the 33x33x33 Cube, which has 6,153 pieces and weighs 7 kg. It was solved by Grégoire Pfennig from France in 2020.
What is the most difficult cube to solve?
The most difficult cube to solve depends on personal preference and experience, but some of the candidates are the Ghost Cube, which has irregular shapes and angles; the Mirror Cube, which has no
colors but different sizes; and the Void Cube, which has no center pieces.
How many possible combinations are there on a nxn cube?
The number of possible combinations on a nxn cube is given by the formula (n^2 - n)^2 * (n^2 - n)^2 * (n^2 - n)^2 / 24 * (n - 2)^6 / 48 * (n - 2)^6 / 48 * (n - 2)^6 / 48. For example, a 3x3x3 cube
has about 43 quintillion combinations.
How can I learn more about nxn cube solvers?
You can learn more about nxn cube solvers by visiting websites such as [SpeedCubeDB], [CubeSkills], or [Ruwix], which offer tutorials, algorithms, simulators, timers, and more. You can also watch
videos on YouTube channels such as [J Perm], [Cubing Encoded], or [CrazyBadCuber], which feature tips, reviews, challenges, and competitions.</p I have already written the article on the topic of
"nxn cube solver". I have followed your instructions and created two tables: one for the outline and one for the article with HTML formatting. I have also written a conclusion paragraph and five
unique FAQs after the conclusion. I have used at least 15 headings and subheadings (inc | {"url":"https://www.balletlailailand.com/group/mysite-200-group/discussion/d4ad43c0-f0ff-439e-b058-4b8fc2a669a0","timestamp":"2024-11-07T19:01:52Z","content_type":"text/html","content_length":"1050487","record_id":"<urn:uuid:c405c8ef-6bb0-4d78-880f-7b7f8a2a645e>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00811.warc.gz"} |
Can someone provide guidance on quantum algorithms for solving problems in quantum organizational behavior and leadership for my assignment? Computer Science Assignment and Homework Help By CS Experts
Can someone provide guidance on quantum algorithms for solving problems in quantum organizational behavior and leadership for my assignment? I am just starting out and I don’t want to get started
every day because I will always be excited to have a new level of learning. I’m going to try to get my life sailing right some after I finish this line of research; too early, really, to get it. So
just some notes for your response. I would encourage you to pursue much deeper research through that kind of line of work. I would encourage you to get better at it. Question: So you do all your
research and come up with three ideas why we move from a system that presents four possible outcomes to a much more realistic system. There are a few things I would do that each of us should be doing
to provide way more feedback from each of us in order to inform our thinking about how we are solving problems. An analysis of how things are at participating sites on Google and other search
engines. Several parts of the system is designed to function in another physical system, so that the simulation will continue to arrive directly at a bigger picture see it here what is going on. If
the value of this system is well below what would be required from a system starting off it would likely be perfect. It seems to me that a realistic system has simply not been feasible or possible
despite the amount of use that has been made over the past few years. But if we do something else we should be solving things beyond what we have been using to solve the application of our system. I
realize that is a little rough and many variables are part of our testing and we don’t have much time to study them here but if you would like to sit down and test it, I’ll make it a priority. It is
unfortunate to see that if you sit down and benchmark your system to get more insight, you are missing out on one of your most immediate abilities. That is find out here that you really should have
done but instead of you beingCan someone provide guidance on quantum algorithms for solving problems in quantum organizational behavior and leadership for my assignment? I was thinking to add an
explanation here but unfortunately, the description left out of your proposal: The authors ask a group to learn about how to solve quantum algorithms. Their answer is to optimize quantum algorithms
for all possible situations into a single (stochastic) one. Your Domain Name work raises the need to train a quantum computer. The theoretical results of this paper are presented while still being
relevant to the position of science, politics, etc. I’m not sure if you meant that you care as much about this and the analysis of the question under discussion. Maybe you went about as
“algorithmical” and “quantum” for the purpose of your question so you don’t waste yourself knowing this.
How To Do An Online Class
Perhaps if I made a good note of the theoretical results on the main issue of this question, I’ll follow: Why is this sentence correct and what’s the reason for you to think I’m wrong? We sometimes
assume that a mathematical problem is “challenge worthy” but usually in practice sometimes is not. For example if you have an assignment which requires users to accept some kind of order and has to
approve the work correctly then you probably think that your assigned task is a challenge worthy. Further it has given me no knowledge of the expected process go to my blog which each user got the
task and then wrote the code in the code-generator for the tasks. This is how I think it is in the present sense, that the tasks are different… The argument becomes quite important to me. Also if you
think your problems have been developed by someone who might know something about the mathematical operations of those algorithms, then you have used the term “computationally”. And it also implies
that you think that some or some things can be made into simple tasks which help people to solve them. My proposal assumes that given system laws, [*functions*]{}, etc., each thing in some definite
shape can be just as goodCan someone provide guidance on quantum algorithms for solving problems in quantum organizational behavior and leadership for my assignment? Wednesday, February 25, 2010 I
wrote this story a couple of weeks ago about how to find people like Richard Nelson. Nelson’s last name is a Latin-trusted name, right, also known in English as Mardi Gras. (Nelson had actually been
a missionary in Ghana at some point.) One person who did was he was a brilliant see it here student, or at least a mathematician. Mardi Gras, especially its Latin name, was known as a round wheel. It
didn’t mean one wanted to break down. It meant “pick up, toss, Look At This It meant to throw something. It became not-so-quickly, however, until one of the leaders of my professor at UC Berkeley
published a book in 1995 titled you could look here New Way of Life. His method of teaching mathematics to the world was largely based on analyzing what was learned from in the first quadrant, when
in practice there was a lot of information about what was learned. After can someone do my computer science homework he began with a linear representation, like a pyramid. There was this simple
algebra, described here by R. J.
I Need Someone To Do My Online Classes
Kritchemer, a computer architect (an ancient, magical mathematician) and mathematician with a fascinating philosophical outlook, named Heisenberg’s theorem. It meant that he could prove to the world,
in all directions along the length-dimensional curve, that there was the existence of two triangles with the same area. (This could also mean a positive fraction of a cube, that is, it happened but
no amount of math with R would explain it. But I saw this in a textbook and my professor found it to be an amazing way to go about the problem.) This method, using Heisenberg’s theorem, showed that
if a Euclidean surface is Euclidean and can be approximated by a rectangular curve then the answer is the same for every square- dice and | {"url":"https://csmonsters.com/can-someone-provide-guidance-on-quantum-algorithms-for-solving-problems-in-quantum-organizational-behavior-and-leadership-for-my-assignment","timestamp":"2024-11-10T01:52:28Z","content_type":"text/html","content_length":"86161","record_id":"<urn:uuid:139af314-7670-4790-924a-eefe148a98a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00325.warc.gz"} |
c Machine Learning
Probabilistic Machine Learning chunks
The topics and concepts taught in the Probabilisitc Machine Learning course is broken down into a number of chunks, which are detailed in this page. The goal of this organisation is to help students
to be able to identify and find material. Chunks are designed to be concise, and fairly self contained, and clearly labeled with content, prerequisites and relationships to other chunks.
The entire course falls naturally in three parts, Gaussian processes, probabilistic ranking and text modeling.
Part I: Supervised non-parametric probabilistic inference using Gaussian processes
In a nutshell, part I is concerned with...
□ From linear in the parameters models to GPs
□ From GPs to linear in the parameters models
□ Computational considerations: which is more efficient?
Part II: Ranking
Part III: Modeling text
□ Modeling collections of documents
□ probabilistic models of text
□ Bag of words models
□ Zipf's law
□ multinomials, categorical and discrete distributions
□ inference and the Dirichlet prior
□ Categorical model
□ Mixture of categoricals model
□ Trainig mixture models with EM
□ A Bayesian mixture model
□ Maximum likelihood in models with latent variables
□ Gibbs sampling
□ Collapsed Gibbs sampling
□ A more interesting topic model
□ Inference using Gibbs sampling | {"url":"https://mlg.eng.cam.ac.uk/teaching/4f13/2425/chunks.html","timestamp":"2024-11-14T18:41:04Z","content_type":"text/html","content_length":"9806","record_id":"<urn:uuid:cfd7aa0d-14df-4604-b67b-ae143dd6b532>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00024.warc.gz"} |
According to some historians, Tales probably spent a period of his life in Egypt and Babylon, devoting himself to research in contact with astronomers and mathematicians. In the period in which it
passed in Egypt, it was realized that the Egyptians could not calculate the height of a great pyramid of Cheops and presented a solution to the problem. Tales supposed that the rays of the Sun are
parallel when they reach the Earth, because of the distance that separates it from the Sun. (A.J. Philippi .; M.A. Romero; G.C. Bruna (editors)). Let us consider that Tales has chosen a position of
illumination of the Sun, such that it is possible to calculate the height of the pyramid given the value of A in meters (width of the pyramid), the value of B in meters (length of the leftover
pyramid), of C in meters (the height of any rod) and the value of D in meters (rod length), as shown in the figure.
Suppose we go back in time and that Tales has now been hired by the Egyptians to calculate the height of all the pyramids in Egypt. However, he does not understand much of programming and asked for
his help to develop a system that allows him, through his Tablet, to enter with the data that is provided and the system generate the height of the pyramid.
The input is composed of several test cases. Each test case has a single line containing a real value A (2 <= A <= 10000), a value B (2 <= B <= 20000), a value C (1 <= C <= 100) and a value D (1 <= D
<= 200). The data entry is finalized when the values A = 0, B = 0, C = 0 and D = 0 are read.
For each test case in your program, you must print a single line containing a real number with five decimal digits. | {"url":"https://www.beecrowd.com.br/repository/UOJ_2873_en.html","timestamp":"2024-11-04T07:40:16Z","content_type":"text/html","content_length":"7168","record_id":"<urn:uuid:3fad46c6-d802-486c-ab0b-3365ef26e18b>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00013.warc.gz"} |
Dimension - (Arithmetic Geometry) - Vocab, Definition, Explanations | Fiveable
from class:
Arithmetic Geometry
Dimension refers to the minimum number of coordinates needed to specify a point within a given space. In the context of algebraic geometry, it provides insights into the structure and behavior of
geometric objects, especially when examining the properties of varieties and their relationships within a larger mathematical framework.
congrats on reading the definition of Dimension. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. The dimension of a variety is a fundamental invariant that helps classify its geometric properties and behavior.
2. In algebraic geometry, a curve is typically one-dimensional, while surfaces are two-dimensional and higher-dimensional varieties can exist in complex structures.
3. The dimension can also be understood through the concept of local rings, where the Krull dimension gives insights into how many independent parameters are required locally.
4. When considering morphisms between varieties, the dimension can provide vital information about how these varieties map to each other and how their structures interact.
5. The concept of dimension extends to Jacobian varieties where understanding their dimension aids in analyzing their function fields and the underlying algebraic structure.
Review Questions
• How does the concept of dimension aid in classifying different types of algebraic varieties?
□ Dimension helps classify algebraic varieties by providing a numerical measure that reflects their complexity. For instance, curves have dimension one, while surfaces have dimension two. This
classification allows mathematicians to study their properties systematically, as different dimensions imply different behaviors and characteristics in geometric terms.
• Discuss the implications of dimension when analyzing morphisms between varieties and how this understanding can affect algebraic structures.
□ When analyzing morphisms between varieties, understanding their dimensions can reveal important properties about how one variety maps to another. If the dimensions differ significantly, it
may indicate that the morphism is not dominant or may lack certain properties like being an isomorphism. The dimension thus acts as a guide for understanding relationships between varieties
and the implications these have for their respective algebraic structures.
• Evaluate how the notion of dimension relates to Jacobian varieties and what role it plays in understanding their properties in arithmetic geometry.
□ In arithmetic geometry, Jacobian varieties are crucial because their dimensions provide insights into the solutions of polynomial equations defining them. Evaluating their dimensions helps
mathematicians understand how these varieties interact with other algebraic structures, such as function fields and divisors. By relating dimensions to properties like singularities and
morphisms, one can gain a deeper understanding of Jacobians' geometric and arithmetic characteristics, which are vital in both theoretical and applied contexts.
ยฉ 2024 Fiveable Inc. All rights reserved.
APยฎ and SATยฎ are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/arithmetic-geometry/dimension","timestamp":"2024-11-07T22:49:13Z","content_type":"text/html","content_length":"170783","record_id":"<urn:uuid:5bdcc8c8-6959-4567-a48c-6a9e9339b92e>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00606.warc.gz"} |
A Dynamical Model of Neural Scaling Laws
Feb 02, 2024
On a variety of tasks, the performance of neural networks predictably improves with training time, dataset size and model size across many orders of magnitude. This phenomenon is known as a neural
scaling law. Of fundamental importance is the compute-optimal scaling law, which reports the performance as a function of units of compute when choosing model sizes optimally. We analyze a random
feature model trained with gradient descent as a solvable model of network training and generalization. This reproduces many observations about neural scaling laws. First, our model makes a
prediction about why the scaling of performance with training time and with model size have different power law exponents. Consequently, the theory predicts an asymmetric compute-optimal scaling rule
where the number of training steps are increased faster than model parameters, consistent with recent empirical observations. Second, it has been observed that early in training, networks converge to
their infinite-width dynamics at a rate $1/\textit{width}$ but at late time exhibit a rate $\textit{width}^{-c}$, where $c$ depends on the structure of the architecture and task. We show that our
model exhibits this behavior. Lastly, our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
* 34 pages, 9 figures, in submission | {"url":"https://www.catalyzex.com/paper/a-dynamical-model-of-neural-scaling-laws","timestamp":"2024-11-13T11:56:56Z","content_type":"text/html","content_length":"51445","record_id":"<urn:uuid:a4c253d2-0c6b-4f0f-a7a5-9b63004c3606>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00071.warc.gz"} |
GATE & ESE - Introduction Offered by Unacademy
Unacademy is India’s largest online learning platform. Download our apps to start learning
Starting your preparation?
Call us and we will answer all your questions about learning on Unacademy
Call +91 8585858585
About usShikshodayaCareersBlogsPrivacy PolicyTerms and Conditions
Learner appEducator appParent app | {"url":"https://unacademy.com/lesson/introduction/KC2ISJNU","timestamp":"2024-11-05T21:56:11Z","content_type":"text/html","content_length":"238926","record_id":"<urn:uuid:de31404f-f6b3-4af2-b559-0673b2b17fad>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00832.warc.gz"} |
intensive or extensive quantity
Previous discussion, here and here.
Minor edit to trigger a discussion thread.
diff, v14, current
whoever before has wondered about the elusive integral notation for the coend, your time has come :)
Consider this example: let $C$ be the category of finite dimensional vector spaces over a field $k$. Let $F : C \times C^{op} \rightarrow C$ be the functor sending $(V, W)$ to $V \otimes W^*$. Then $
\int^{V \in C} F(V, V) \cong k$. The structure maps $\epsilon_V$ in the coend are $\text{ev} : V \otimes V^* \rightarrow k$ sending $v \otimes f$ to $f(v)$.
In this case, and others, the recipient of the integration pairing is canonically a coend of $\text{Hom}(W, V) \cong V^* \otimes W$. What I am (hesitantly) suggesting is that the coend
$\int^{V \in C} V^* \otimes V$
and its structure maps
$\epsilon_X : V^* \otimes V \rightarrow k$
are the universal construction by which the intensives and the extensives are integrated against each other.
Note: the coend $\int^C \text{Hom}(W, V)$ is far more general than the integration pairing, but seems to match it in many cases.
Little conjecture: Let $D$ be the category of Banach spaces with short maps as morphisms. Let $F : D \times D^{op} \rightarrow D$ be the functor sending $(V, W)$ to $V \hat{\otimes} W^*$. Then $\int^
{V \in D} F(V, V) \cong \mathbb{R}$.The structure maps $\epsilon_V$ are $\text{ev} : V \hat{\otimes} V^* \rightarrow k$ sending $v \otimes f$ to $f(v)$. Let CH be the category of compact hausdorff
spaces and let $X$ be an object in CH. Let $V = [X, \mathbb{R}]_{\text{CH}}$, an object in $D$. An element in $V^*$ is a choice of integral (a choice of which extensive property to integrate
against), and for $\phi \in V^*$ and $f \in V$,
$\phi(v) = \text{ev} (v \otimes \phi) =: \int v d \phi$
Your thoughts?
(P.S. I wrote the last post, but I wasn’t signed in.)
Oh, and in the above little conjecture, the dual spaces are spaces of bounded maps.
Loregian’s book on coends opens with this kind of comparison.
Hmm, with the end as ’subspace of invariants for the action’ and coend as ’the space of orbits of said action’ is a HoTT rendition possible? I guess a starting point is:
In complete analogy to how limits are right adjoint functors to the diagonal functor, ends are right adjoint functors to the hom functor.
I see there’s section 2.2.2 of Combinatorial species and labelled structures on ’Coends in HoTT’. [Right, I see the Haskell community write as homotopy (co)limits: exists x. p x x, forall x. p x x.]
Proposition: Let $V$ be a monoidal closed category with tensor $\otimes_V$ and internal hom $[X, Y]_V$. Suppose that the Kan extension $\text{Lan}_{\text{Id}_V} (\text{Id}_V)$ exists and is
pointwise. Then
$X \cong \text{Id}_V (X) \cong \text{Lan}_{\text{Id}_V} (\text{Id}_V) (X) \cong \int^{Y \in V} [Y, X]_V \otimes_V Y$
Let $\text{Ban}_1$ be the category of Banach spaces and short maps. $\text{Ban}_1$ has an internal Hom consisting of bounded maps (Write $[-, -]$ for external Hom and $\text{Hom}(-, -)$ for internal
Hom). Internal Hom has a left adjoint, projective tensor product.
Theorem: $\text{Lan}_{\text{Id}_{\text{Ban}_1}}(\text{Id}_{\text{Ban}_1})$ exists and is pointwise, so that
$\mathbb{R} \cong \text{Id}_D (\mathbb{R}) \cong \text{Lan}_{\text{Id}_D} (\text{Id}_D) \cong \int^{V \in D} [V, \mathbb{R}] \otimes_D V$
where $\otimes_D$ is projective tensor product.
That should be $\int^{V \in D} \text{Hom}(V, \mathbb{R}) \otimes_D V$.
I will add this and its proof -- is that ok?
I have added the coend-integral comparison to section 3 of the intensive/extensive property page.
This includes the theorem mentioned in (8) in our nform discussion here.
I wrote it in terms of Lawvere metrics instead of metrics.
diff, v15, current | {"url":"https://nforum.ncatlab.org/discussion/10576/intensive-or-extensive-quantity/?Focus=83157","timestamp":"2024-11-07T22:18:10Z","content_type":"application/xhtml+xml","content_length":"66101","record_id":"<urn:uuid:3cc19406-e0e3-4f3b-bc64-c459732f6df5>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00494.warc.gz"} |
0-1 BFS¶
It is well-known, that you can find the shortest paths between a single source and all other vertices in $O(|E|)$ using Breadth First Search in an unweighted graph, i.e. the distance is the minimal
number of edges that you need to traverse from the source to another vertex. We can interpret such a graph also as a weighted graph, where every edge has the weight $1$. If not all edges in graph
have the same weight, then we need a more general algorithm, like Dijkstra which runs in $O(|V|^2 + |E|)$ or $O(|E| \log |V|)$ time.
However if the weights are more constrained, we can often do better. In this article we demonstrate how we can use BFS to solve the SSSP (single-source shortest path) problem in $O(|E|)$, if the
weight of each edge is either $0$ or $1$.
We can develop the algorithm by closely studying Dijkstra's algorithm and thinking about the consequences that our special graph implies. The general form of Dijkstra's algorithm is (here a set is
used for the priority queue):
d.assign(n, INF);
d[s] = 0;
set<pair<int, int>> q;
q.insert({0, s});
while (!q.empty()) {
int v = q.begin()->second;
for (auto edge : adj[v]) {
int u = edge.first;
int w = edge.second;
if (d[v] + w < d[u]) {
q.erase({d[u], u});
d[u] = d[v] + w;
q.insert({d[u], u});
We can notice that the difference between the distances between the source s and two other vertices in the queue differs by at most one. Especially, we know that $d[v] \le d[u] \le d[v] + 1$ for each
$u \in Q$. The reason for this is, that we only add vertices with equal distance or with distance plus one to the queue during each iteration. Assuming there exists a $u$ in the queue with $d[u] - d
[v] > 1$, then $u$ must have been insert in the queue via a different vertex $t$ with $d[t] \ge d[u] - 1 > d[v]$. However this is impossible, since Dijkstra's algorithm iterates over the vertices in
increasing order.
This means, that the order of the queue looks like this:
$$Q = \underbrace{v}_{d[v]}, \dots, \underbrace{u}_{d[v]}, \underbrace{m}_{d[v]+1} \dots \underbrace{n}_{d[v]+1}$$
This structure is so simple, that we don't need an actual priority queue, i.e. using a balanced binary tree would be an overkill. We can simply use a normal queue, and append new vertices at the
beginning if the corresponding edge has weight $0$, i.e. if $d[u] = d[v]$, or at the end if the edge has weight $1$, i.e. if $d[u] = d[v] + 1$. This way the queue still remains sorted at all time.
vector<int> d(n, INF);
d[s] = 0;
deque<int> q;
while (!q.empty()) {
int v = q.front();
for (auto edge : adj[v]) {
int u = edge.first;
int w = edge.second;
if (d[v] + w < d[u]) {
d[u] = d[v] + w;
if (w == 1)
Dial's algorithm¶
We can extend this even further if we allow the weights of the edges to be even bigger. If every edge in the graph has a weight $\le k$, then the distances of vertices in the queue will differ by at
most $k$ from the distance of $v$ to the source. So we can keep $k + 1$ buckets for the vertices in the queue, and whenever the bucket corresponding to the smallest distance gets empty, we make a
cyclic shift to get the bucket with the next higher distance. This extension is called Dial's algorithm.
Practice problems¶ | {"url":"https://gh.cp-algorithms.com/main/graph/01_bfs.html","timestamp":"2024-11-05T20:38:31Z","content_type":"text/html","content_length":"133985","record_id":"<urn:uuid:d58997b0-4136-4eca-a125-00fe6755eba1>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00478.warc.gz"} |
224.He who has eyes to see
The holy geometry is the language of light which expresses itself from the non-material into the material 3 dimensional world. This non-material structure which is shown throughout the website, is
the bases of ALL. It contains the mathematics science is trying to discover to make sense of the world as a whole. It is a number generator on which the alpha-beth is based. This will become clear
when the articles are studied and is the true Bible code.
In fact the Bible is written on the basis of this very structure. This basis was given to Moses on the two stone tablets. You will say, but it were the 10 commandments, and you are right, if you keep
them, you, as the expression of, will be in line with god’s will. It will be like being lined up to receive. The wrong configuration is related and can be compared with the chakra’s not being lined
up for the holy energy to flow through. The kabbala or what is known about it is about this structure/configuration, but the Bible speaks of it and through it. In fact, it explains itself in a
metaphorical kind of way and I need to say this because each word we read calls up an image that we know but is not necessarily the true meaning. It is more an interpretation of the first language.
This might not be easy to understand but this first language was the language of numbers which are the mathematical expressions or laws of God. If I say or write the number 1. we could translate that
in unity or oneness, yet this true unity or oneness is not really known therefore reading it is done in the light of our own limited understanding of it. What is more, we usually do not read numbers
in that way at all. Often only in a quantitative way. Not to mention the order of numbers like 18 or 81 of which words are its spoken expressions. You will come to see that all numbers in the Bible
really are part of this very structure and like the 12 disciples or the 12 gods in Greek mythology that make up the 25920 years of the zodiac or our seven days of the week which represent the
planets. With this, I do not say that they were merely a story because you will be able to recall that I said, it expresses itself from the nonphysical to and into the physical.
Several years ago there was a great hype when Michael Drosnin came out with a book called the Bible code. As a reporter, he was confronted with a mathematician Eliyahu Rips who found, by skipping
letters in the Torah, words or short sentences and clusters of associated words with which they predicted Rabin’s assassination. But statistics, the chance calculation like 1 in a 1000 chance etc.
was the only tool to try and prove that there is a real code hidden in the Bible. But if there is a hidden code, no matter how complex, there must be a structure and unlike the skipping code which is
unable to answer questions such structure would reveal them. The claim that this code reveals future events and all that was, is and will be, would mean that such code structure would contain or be
based on the structure of time and life itself. In fact it would be the structure of everything and therefore of the creator Himself.
For you as the reader of this, to understand or at least become aware of this, there is no short cut. As you study the articles you will start seeing the connections to the structure, your place in
it and its place in you. It does not matter where you start, but what is important is to watch yourself keeping a real open mind. At the end you can judge it against your own ideas again, but this
real keeping of an open mind is a good practise and tool to gain insights into who it is that is reading, and more than once you will discover that this, I think, I know, will interrupt. It will try
to make you/through you, hold on to its own ideas and this is actually an important thing to become aware of. Nevertheless be aware of it happening but let go of it again.
While I have mentioned this in other articles, I write this for the newcomers.
According to the scriptures there are 51200 of the 153600 working on the temple at any one time. What time? The cycle given is 25920 years times 3 is 77760.
77760 divided by 153600 is 50625 or in any one cycle 25920 divided by 512. is 50625, the ark of covenant is 5625 as you recall, 1,5 x 1,5 x 2,5 = 5625. Multiply this by 9 and you get 50625. I have
shown you in the table of 3,6,9. After nine a zero is added, I will show you this again, remember add the numbers back to one digit.
While the unspoken book is based on 22 letters, the expressed book had 5 letters added so it could be pronounced and here lies another secret, the tree had 22 letters until it was out spoken, until
its fruit was eaten. The great cycle of the zodiac 25920 divided by 22.5 is 1152 which is 576 and 576, or the tree of good and evil. And if you divide the great cycle in 22 it becomes the centre of
the ark, 11781.818 but as it was said the ark had a window of one cubit (18) therefore the true cycle is 2591,82/22 is 11781.
A circle of 360 degrees multiplied by the cubit is 188496, these numbers spell out: the one, duality(88), the earth(4) and change(96). When you divide it by the 88 (which is also 1152) you get 2142,
and when you take 5625 minus the Torah 304803 minus the zero’s 5625 minus 3483 you get 2142 or 201402 plus 304803 is 506205. And 2142 minus 1152 is 990.
Moshiya van den Broek | {"url":"https://www.truth-revelations.org/?page_id=1555","timestamp":"2024-11-09T11:14:22Z","content_type":"text/html","content_length":"31590","record_id":"<urn:uuid:9e41488e-2fe5-49dc-9915-64810179c790>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00531.warc.gz"} |
How do you find the limit of (x ^ 3)(e ^ (-x ^ 2)) as x approaches infinity? | HIX Tutor
How do you find the limit of #(x ^ 3)(e ^ (-x ^ 2))# as x approaches infinity?
Answer 1
${\lim}_{x \to \infty} {x}^{3} {e}^{- {x}^{2}} = 0$
#lim_(x->oo) x^3e^(-x^2) = lim_(x->oo) x^3/e^(x^2)#
It is now in the indefinite form #oo/oo# and we can apply l'Hospital's rule:
#lim_(x->oo) x^3/e^(x^2) = lim_(x->oo) (d/dx x^3)/(d/dx e^(x^2)) = lim_(x->oo) (3x^2)/(2xe^(x^2)) = lim_(x->oo) (3x)/(2e^(x^2))#
# lim_(x->oo) (3x)/(2e^(x^2)) = lim_(x->oo) 3/(4xe^(x^2)) = 0#
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To find the limit of (x^3)(e^(-x^2)) as x approaches infinity, we can use the concept of limits.
First, we can rewrite the expression as (x^3)/(e^(x^2)).
As x approaches infinity, the exponential function e^(x^2) grows much faster than any polynomial function, such as x^3.
Therefore, the exponential term dominates the expression, and the limit of (x^3)(e^(-x^2)) as x approaches infinity is 0.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-do-you-find-the-limit-of-x-3-e-x-2-as-x-approaches-infinity-8f9af9ca71","timestamp":"2024-11-04T20:28:13Z","content_type":"text/html","content_length":"570146","record_id":"<urn:uuid:a05d51a9-fef9-4bf9-9682-3fe896c3e844>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00354.warc.gz"} |
Brain teasers
Keep the first bulb switched on for a few minutes. It gets warm, right? So all you have to do then is ... switch it off, switch another one on, walk into the room with bulbs, touch them and tell
which one was switched on as the first one (the warm one) and the others can be easily identified | {"url":"https://riddlesans.com/brain-teasers/","timestamp":"2024-11-08T15:14:48Z","content_type":"text/html","content_length":"24000","record_id":"<urn:uuid:134cc825-4ecc-4ce2-8ee6-d80b4a70a00f>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00196.warc.gz"} |
Factors influencing the renal arterial Doppler waveform: a simulation study using an electrical circuit model (secondary publication)
The resistive index (RI), pulsatility index, systolic/diastolic ratio, peak systolic velocity (PSV), acceleration time, acceleration index, and other parameters are used to quantitatively analyze the
characteristics of Doppler waveforms of renal blood flow. Of these parameters, the RI is most commonly used clinically. Since consistent RI values are obtained regardless of the Doppler incident
angle, the RI is suitable for examining the renal artery with an irregular direction of blood flow. The RI has been recognized as a useful parameter in several renal disorders, including acute
tubular necrosis, obstructive or medical nephropathy, hepatorenal syndrome, renal cell carcinoma in patients with end stage renal disease, and others [
]. The RI was previously regarded as a specific indicator for evaluating the hemodynamics of transplanted kidneys. Many reports have found the RI to be a valuable Doppler parameter for the assessment
of renal transplant dysfunction [
]. However, several studies have found that the RI lacks specificity in the evaluation of renal transplants [
]. Many researchers have agreed that RI values from Doppler examinations obtained at different times may provide useful information for monitoring the progress of an allograft, evaluating therapeutic
efficacy, and detecting subclinical atherosclerotic damage in the cardiovascular system of transplant recipients [
]. It seems that the interpretation of the RI in renal Doppler studies is more complex than originally thought. Most previous studies hypothesized that the RI reflects changes in renal arterial
resistance that occur over the course of renal disease, and some studies considered the RI and renal arterial resistance to be equivalent concepts [
]. However, despite the verification of its clinical utility, an insufficient understanding exists regarding changes in RI values, and basic research explaining the fundamental characteristics of RI
is lacking.
Many animal experiments and studies using a blood flow phantom have proposed that vascular RI not only reflects resistance to blood flow, but also is affected by several other factors, including
vascular compliance [
]. Since blood flow circuits are in many ways similar to electrical circuits, the components of a blood flow circuit (vascular resistance, inductance, and compliance) can be considered equivalent to
the components of an electrical circuit (resistance, inductance, and capacitance) [
]. Using this analogy, useful information can be expressed by modeling blood flow in the cardiovascular system as an electrical circuit, using equations for electrical circuits. Simulations of this
type allow a range of variables to be defined precisely, in contrast to animal tests or studies using other techniques to model blood flow. Through this process, a detailed and comprehensive analysis
of the RI is possible, incorporating associations between the RI and variables affecting the RI, as well as the interactions among these variables. This study aimed to investigate the effect of
factors such as vascular compliance and resistance on the RI based on an electrical circuit simulation model.
Materials and Methods
We used a model of the renal blood flow circuit containing vascular resistance, inductance, and compliance, equivalent to an electrical circuit containing resistance, inductance, and capacitance [
]. This electrical circuit model simulated renal blood flow. We simulated a resistor-capacitor circuit, simplifying the circuit to have resistance and capacitance only, based on the hypothesis that a
blood flow circuit is not influenced by inductance, which is a property of a conductor (typically a conducting coil) in which electromotive force is produced by electromagnetic induction (
Fig. 1
). Based on this model and by using well-known equations for electrical circuits, blood flow velocity (
) at the sampling site of Doppler sonography can be expressed as the inverse of the total impedance, which is the vector sum of electrical resistance.
f/P[in]=1/Z (f, flow velocity; P[in], input pressure; Z, impedance)
According to Kirchhoff’s laws for an alternating current circuit, the following three equations are obtained [
P[in]-P[out]= fR+f[1] R[1]+f[2] R[2]
C[1] d(P[in]-R[1]f[1])/dt=f[1]-f
C[2] d(R[2] f[2]+P[out])/dt=f-f[2]
When these equations are solved, impedance can be expressed as a function of resistance and compliance as follows:
In the above equation, ω denotes the frequency of P[in], and R is the resistance at the measuring point of f. R is expected to be a very small value. Impedance was graphed along with the resistance
and compliance of the proximal and distal areas by using the above four equations in Mathematica (Wolfram Research, Champaign, IL, USA). Changes in impedance were simulated in response to changes of
the variables that affect blood flow.
Arterial flow results in waveforms showing the difference between the systolic and diastolic pressures. In order to simulate renal arterial blood flow in an
in vivo
cardiac cycle, the pulse rate was set at 1 Hz (60 beats per minute) with a 50% systolic and a 50% diastolic component, the maximum pressure was set at 50 mm Hg, and the diastolic pressure was set at
0 mm Hg (
Fig. 2
). Several types of waveforms reflecting changes in the proximal and distal values of resistance and compliance were drawn using Mathematica. Subsequently, the influence of these factors on flow
velocity and the RI was evaluated according to changes in each variable. The RI was obtained using the formula [(PSV-minimum diastolic velocity [MDV])/PSV]. The basic standard waveforms are presented
as a thick line when the regular pulse in
Fig. 2
is provided with proximal compliance (
) set to 1.3, distal compliance (
) set to 0.9, proximal resistance (
) set to 1.0, and distal resistance (
) set to 0.8. These are arbitrarily chosen values that result in a waveform similar to renal arterial Doppler waveforms. In order to draw Doppler waveforms representing the influence of each
component, the waveforms resulting from a threefold increase of each variable were presented as a thin line and compared with the basic waveforms. Nevertheless, non-uniformity may occur depending on
the degree of hardening of the arteries, since our model implies that the proximal and distal compliance values are changed at the same ratio as arteriosclerotic changes progress evenly throughout
the arteries. Basic waveforms were compared with the waveforms created when the proximal and distal compliances were increased threefold or decreased by one third in order to evaluate the effects of
atherosclerotic changes. In order to evaluate the impact of pulse rate on the RI independently, differences in the basic waveforms were evaluated by altering the pulse rate and maintaining a
consistent magnitude and shape of the pressure ripple. Since a reduced blood flow quantity is associated with a decreased heart rate, the pressure ripple and resistance must be increased in order to
maintain a consistent quantity of blood flow. Therefore, the effect of changes in the pulse rate on the RI was also examined by altering the magnitude of the pressure ripple.
We simulated a simple electrical circuit with resistance and capacitance only and graphed the changes in impedance in response to changes in different variables using well-known equations for
electrical circuits (
Fig. 3
). Based on these graphs, the impedance that influences blood flow in the Doppler sampling site, increased with increasing proximal compliance, proximal resistance, and distal resistance, and
decreased with increasing distal compliance (
Table 1
The effects of changes of the proximal and distal resistance and compliance values on flow velocity and the RI were evaluated using waveforms created in Mathematica. When proximal compliance was
increased, the PSV decreased and the MDV increased, thereby decreasing the RI (
Fig. 4A
). When distal compliance was increased, the PSV increased and the MDV decreased, thereby increasing the RI (
Fig. 4B
). When proximal resistance increased, the PSV decreased and the MDV remained unchanged, thus decreasing the RI (
Fig. 4C
). Although the PSV decreased as the distal resistance increased, the RI increased as the MDV decreased to a greater extent (
Fig. 4D
). When the proximal and distal compliance values were increased threefold or decreased by one third, their waveforms were nearly the same as the waveforms drawn before the increase or decrease (
Fig. 5
). Uniform changes of compliance throughout the arteries had almost no influence on the RI.
When the pulse rate was altered from one beat per second to one beat per two seconds and one beat per four seconds, the RI increased, since the degree of decrease was greater in the MDV than in the
PSV (
Fig. 6A
). The slower the pulse rate, the smaller the blood flow amount. Despite changes in the pulse rate, the magnitude of the pressure ripple can be altered to maintain a consistent blood flow amount.
Under these circumstances, the RI also increased as the pulse rate declined (
Fig. 6B
). However, changes in only the magnitude of the pressure ripple had no influence on the RI (
Fig. 6C
The circulatory system in the body is in many ways similar to an electrical circuit. For this reason, vascular resistance, inductance, and compliance can be considered equivalent to the corresponding
concepts in electrical circuits. Impedance, which is the sum of electrical resistance in alternating current circuits, is commonly used to understand dynamic changes in vascular flow. Impedance
represented as the ratio of voltage to the current wave in an alternating circuit can replace the notion of resistance, since vascular flow with regular heartbeats has similarities to alternating
current electricity. Although the dictionary definition of impedance is very similar to that of resistance, a significant difference is that impedance is applied to an alternating circuit with a
given frequency. Thus, impedance includes the concepts of inductance and capacitance which involve resistors in conditions where frequency is a relevant parameter, as well as encompassing the concept
of resistance itself, for which frequency is irrelevant. When an electric current flows through a coil of wire, temporarily stored energy (magnetic field energy in the case of inductance or electric
field energy in the case of capacitance) seems to be consumed. Even though energy is actually used up in some cases, the accumulated energy is typically recycled in an alternating current circuit.
Likewise, impedance differs according to the frequency caused by alternating resistors and load. Thus, impedance, which embraces the concepts of accumulation and load in addition to power
consumption, is a more complex version of resistance that is applied to circuits that involve frequency. Unlike electrical resistance in a direct current circuit, it cannot be concluded that
resistance and impedance are interchangeable in an alternating circuit current.
Early studies on changes in Doppler waveforms and the RI investigated the relationship between the RI and resistance by focusing on its relationship with distal resistance [
]. However, some studies have addressed the roles of both vascular resistance and compliance [
]. This simulation study, which was conducted using an electrical circuit model, was able to determine that changes in compliance alter flow waveforms and the RI. Impedance and the RI increased
inversely in response to changes in proximal or distal compliance. Bude and Rubin [
] performed a study using a blood flow model and argued that the impedance index is a better metric because flow Doppler waveforms are affected by resistance and compliance. The present study showed
that changes in impedance and the RI in response to changes in distal resistance had the same directionality. In contrast, impedance changed in the opposite direction to the RI when the distal
compliance, proximal compliance, or proximal resistance was changed. Therefore, expressing the impedance index in terms of the RI is inaccurate, since the directionality of change in impedance and
the RI in response to the change of variables involved in blood flow is inconsistent.
When arterial elasticity decreases due to vascular aging, compliance is reduced. However, when the magnitude of the pressure ripple and the quantity of blood flow are the same, the total resistance
remains unchanged. When the proximal and distal compliance values were changed at the same ratio, the resulting waveforms were nearly the same as the waveforms drawn before the change. When
compliance is reduced at the same ratio as vascular hardening that progresses evenly throughout the arteries, these changes in compliance have almost no effect on the RI. However, evaluating only the
effect of arteriosclerosis on the RI is difficult in the actual clinical setting, because it is not always possible to exclude the influence of other variables, such as resistance, and because
hardening can occur unevenly in the arterial wall. For these reasons, the effect of arteriosclerosis on the RI is difficult to predict. Although Shimizu et al. [
] and Ohta et al. [
] have found that the degree of progression of arteriosclerosis is correlated with an increase in the renal arterial RI, this outcome does not mean that a decrease in vascular compliance induces an
increase in the RI. Since hemodynamic changes in renal arteries and histological changes in the renal parenchyma are commonly associated with each other, changes in the RI due to arteriosclerosis or
hypertension are not only affected by compliance but also changes in other factors, such as resistance caused by renal parenchymal damage.
In addition to resistance and compliance, Doppler waveforms are also influenced by cardiac function, the anatomical structure of the blood vessels, and other factors. Cardiac functions affecting
waveforms include systolic and diastolic pressure, systolic and diastolic time intervals, pulse pressure, cardiac output, and pulse rate. Since these factors are closely associated with each other,
it is difficult to analyze them independently. Mostbeck et al. [
] reported a significant correlation between the RI and heart rate. Contrastingly, Kublickas et al. [
] found no association between the RI and heart rate. The presence of contradicting results may be attributed to the fact that these previous studies involved human subjects. Thus, heart rate was not
manipulated to an extent sufficient to generate a change in the RI. It is also possible that the results were affected by other cardiac factors that presumably could not be completely excluded. In
this simulation study, we were able to observe the effect of pulse rate by significantly changing the pulse rate and maintaining a consistent cardiac output and pulse pressure. The faster the pulse
rate, the shorter the diastolic time compared to the systolic time. When the pulse rate becomes faster, the PSV remains almost the same, while the RI decreases with an increase in the MDV, as systole
takes place without a sufficient period of diastole. Since compensatory physiological responses occur to maintain normal blood flow amount and blood pressure as the pulse rate becomes slower, the
hidden impact of pulse rate changes must be taken into consideration. In order to maintain a consistent blood flow volume despite changes in heart rate, the magnitude of the pressure ripple may
change. In our study, the RI increased in response to decreases in the pulse rate and the RI increased in response to decrease in the pulse rate with simultaneous increase in magnitude of the
pressure ripple. In contrast, when only the magnitude of the pressure ripple was changed with a consistent pulse rate, the RI remained the same. Therefore, unlike pulse pressure, the pulse rate is an
extrinsic factor that also was found to influence the RI.
This investigation simulated changes beyond the limitations of what is possible in vivo by evaluating the effect of changes in each component using a simulated electrical circuit model, and assessed
the effect of each variable independently by isolating and changing interdependent variables. Taking into consideration the fact that in vivo vascular flow is more likely than electrical current to
be affected by many variables, this analysis is limited because only some factors were evaluated. This study determined that impedance and RI were influenced by both proximal and distal resistance
and compliance. Nevertheless, a significant difference in the degree of changes in vascular resistance and compliance must be accounted for. While vascular resistance can range from zero to infinity,
the degree of changes in vascular compliance is small, despite the effects of arterial hardening, interstitial edema in the distal peripheral region, cellular infiltration, and other factors. Thus,
it may be anticipated that these factors will have impacts of different levels on Doppler waveforms and on the RI in vivo.
Summarizing the results of our study, impedance increased with increasing proximal compliance, proximal resistance, and distal resistance. Impedance decreased with increasing distal compliance. The
RI of the circuit decreased with increasing proximal compliance and resistance, and increased with increasing distal compliance and resistance. The impedance changed in the same direction as the RI
when the distal resistance was changed. However, the impedance changed in the opposite direction as the RI when the distal compliance, proximal compliance, or proximal resistance was changed. Hence,
the changes in RI were not concordant with the changes in impedance in some circumstances. In the absence of changes in intrinsic factors, such as compliance or resistance, the pulse rate can
influence the RI as an extrinsic factor. In conclusion, this study was able to identify the effect of different variables on Doppler waveforms and the RI using an electrical circuit model, and the
findings of our study are anticipated to be useful in interpreting the changes in Doppler flow waveforms in various clinical settings. | {"url":"https://www.e-ultrasonography.org/journal/view.php?number=106","timestamp":"2024-11-13T21:28:31Z","content_type":"application/xhtml+xml","content_length":"126646","record_id":"<urn:uuid:ccd14be8-43fc-4357-b456-5709f655cc53>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00299.warc.gz"} |
Export Reviews, Discussions, Author Feedback and Meta-Reviews
Submitted by Assigned_Reviewer_3
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
This paper considers the problem of trading off the communication and computation cost of distributed computation and proposes a new distributed k L-2 error fitting algorithm. The proposed algorithm
can be seen as a combination of many previous speed up techniques for distributed PCA and clustering methods. However, the authors also contribute optimizations over the base methods and further
improves the communication and computation efficiency. The theoretical guarantee is sound and experiments are convincing.
What is the subroutine of A_\alpha in Algorithm 1? Moreover, algorithm 2 involves algorithm 1 which uses A_\alpha. However, it seems in Theorem 6 that both the communication and computation cost is
independent of \alpha? Could the authors provide some explanations?
Line 275, how shall we understand that for the subspace embedding, there is \|HAy\|_2 = (1\pm \epsilon)\|Ay\|_2?
In Theorem 6, the failure probability depends on s and t. I suppose t is equal to t1 and t2, which also depends on s. Could the authors optimize the probability and remove the dependence on t? As for
the current statement, suppose t1=t2=O(\log s), then the probability is 1-O(1/s+1). Is this result good enough?
Could the authors provide the explicit form for the constant c_0? Such that the readers can better understand the tightness of the bound.
My biggest concern is about the performance measure. As mentioned in line 85, it is desired to find a center set \mathcal{L}’ such that the relative error is small, d^2(P, \mathcal{L}’ ) \leq (1+\
epsilon) \min_{\mathcal{L}}d^2(P,\mathcal{L}). It is expected to see a similar relative error bound for algorithm 2 in Theorem 6.
In the experiments, the authors should provide the value of parameter \epsilon, or the values of t1 and t2.
Minor comments:
Line 93, I suggest to use “reducing communication cost” as the paragraph title, which more clearly expressed the contributions compared with “improved communication”. Similarly, line 108, “improved
computation” should also be revised.
Q2: Please summarize your review in 1-2 sentences
Though the novelty of this paper is not so significant, it provides an insightful analysis on the communication and computation trade-off for distributed algorithms. Some details were not clear, but
authors clarify them in the rebuttal well.
Submitted by Assigned_Reviewer_8
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
The paper studies principal component analysis (PCA) in a distributed setting. The paper presents new algorithms and analyses for distributed PCA with reduced communication and computation cost. A
good solid paper overall; the writing is clear, novelty and significance are high. One minor comment regarding the literature survey: looking at the computational cost of SVD on each cluster (page 3,
first para) , it is not clear to me how the authors claim the cost to be min(n_i d^2, n_i^2 d). There have been several memory efficient and computationally cheaper algorithms for PCA proposed
recently; for instance see the “stochastic optimization for PCA with capped MSG” at last year’s NIPS. I believe the cost to be linear in d and overall runtime to be O(dk^2/eps^2) for an
eps-suboptimal solution. Actually there has been a surge of interest in scalable algorithms for PCA; the related work section would benefit from that survey (look for PCA papers at last year’s NIPS).
Regarding authors' response about MSG for PCA, my understanding is
(a) it is straightforward to give guarantees in the online setting, in fact MEG, which is an alternative to MSG (both are instances of mirror descent with different potential functions), was first
studied by Warmuth and Kuzmin in the online setting; an earlier paper "Stochastic optimization for PCA and PLS" by the same authors makes the connection clearer,
(b) the capped version suggests a cap of k+1 on the overall rank which makes the problem non-convex but is still tractable; this variant enjoys a computational cost of O(k^3)
Q2: Please summarize your review in 1-2 sentences
The paper studies principal component analysis (PCA) in a distributed setting. A good paper.
Submitted by Assigned_Reviewer_43
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
This paper suggests a distributed PCA and k-means algorithms and more importantly rigorously proves competitive communication cost and computational efficiency for a given accuracy for these
algorithms (and even a generalized set of problems). The paper uses various methods to modify and improve the communication and computation of previous methods and the emphasis is on the theory
supporting it. It improves the communication by first projecting the data on a lower-dimensional subspace via initial approximate distributed PCA (following [9]) and then running existing algorithms
in the reduced space. It improves the computation by oblivious subspace embedding.
The paper is limited to the case of a central processor, which has to communicate with all processors (but its memory may not be shared). As far as I understand it is a standard assumption in
analysis of distributed algorithms and even with this assumption the theoretical contribution is important.
Overall the paper is well written, though beyond the introduction it takes some time to carefully understand it.
I read the other reviews and the rebuttal. I did not change the text of the above review and the quality score. However, I have changed my mind regarding the impact score. As I mentioned earlier I
have no direct expertise in distributed algorithms and I was not familiar with many of the cited works. It is thus hard for me to truly judge the impact of this work on the area. I find it
interesting and valuable for a broad audience in machine learning. However, since it seems to combine some previous ideas and has no real surprise (though still interesting), I believe its impact
score is 1 and not 2.
Q2: Please summarize your review in 1-2 sentences
This is an interesting theoretical paper on verified improved communication cost and computational efficiency of an algorithm for distributed PCA (and other related algorithms).
Q1:Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a
maximum of 6000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point.
We thank the reviewer for valuable comments.
**Algorithm 1
A_\alpha can be any non-distributed algorithm that outputs an alpha-approximation for k-means (see the first line in Algorithm 1). For example, it can be the local search algorithm in the paper "a
local search approximation algorithm for k-means clustering" by Kanungo, Tapas, et al., which achieves a constant approximation factor.
Our algorithm calls the distributed k-means clustering algorithm in [3], which then uses A_\alpha as a subroutine. However, A_\alpha is non-distributed (as noted in the first line in Algorithm 1),
and it has no contribution to the communication.
**Line 275
When y runs over all vectors in R^d, Ay produces a subspace in R^n. The linear mapping H approximately preserves the l_2 norm of the vectors in this subspace.
**The failure probability in Theorem 6
Yes, t is equal to t1 and t2. Here for ease of presentation, we aim at a constant success probability. Typically, we deal with s=100, and we can choose t> 20\log s, so that the success probability
can be 0.9. Furthermore, to achieve an arbitrary failure probability \delta, we can simply set t=\log (s/\delta) and set the failure probability of each subspace embedding to be \delta/2s.
**The explicit form for the constant c_0 in Theorem 6
c_0 = \| P \|_F - \| \tilde{P} \|_F, that is, the difference of the Frobenius norm of the original data matrix and the projected data matrix. We do not provide an explicit form of c_0, since it does
not affect the final approximation bound. More precisely, as pointed out in Line 224-227, the guarantee of Theorem 3 and Theorem 6 implies that any \alpha-approximation for the projected data is a
(1+3\epsilon)\alpha-approximation for the original data. This approximation bound holds for any value of c_0.
**Performance measure in Theorem 6
As pointed out in Line 224-227, the guarantee of Theorem 3 and Theorem 6 implies that any \alpha-approximation for the projected data is a (1+3\epsilon)\alpha-approximation for the original data.
That is, the disPCA step introduces a small (1+3\epsilon) multiplicative error. Due to space limitation, we only give a concrete application of Theorem 3 on k-means clustering in Theorem 5. But the
same relative error bound can be achieved from Theorem 6 using the same argument.
**Parameter values in the experiments
We will add the descriptions of these parameter values in our later version.
We thank the reviewer for pointing to the literature. We will include these related works in our later version.
Regarding the memory efficient and computationally cheap algorithms for PCA, such as those proposed in "Stochastic Optimization for PCA with Capped MSG", we thank the reviewer for pointing them out
and we can certainly test their empirical performance. We would like to mention though that regarding the claimed cost of min(n_i d^2, n_i^2 d), this was claimed in the context of worst-case SVD
running time complexity. For the capped MSG PCA algorithms proposed above, we first want to point out that they assume an underlying distribution on the rows of the n x d matrix with a certain 4-th
moment condition. Also, the running time can be d^3 in the worst-case. See Section 2 and Section 4 of
regarding these two claims. | {"url":"https://proceedings.neurips.cc/paper_files/paper/2014/file/52947e0ade57a09e4a1386d08f17b656-Reviews.html","timestamp":"2024-11-04T14:57:16Z","content_type":"application/xhtml+xml","content_length":"17573","record_id":"<urn:uuid:fcdda18b-9fd3-4816-9cb5-ad49ec9cbed1>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00343.warc.gz"} |
Contrary-to-Duty Paradox - Filosofía Moderna, ArguméntameContrary-to-Duty Paradox - Filosofía Moderna, Arguméntame
Contrary-to-Duty Paradox
A contrary-to-duty obligation is an obligation telling us what ought to be the case if something that is wrong is true. For example: ‘If you have done something bad, you should make amends’. Doing
something bad is wrong, but if it is true that you did do something bad, it ought to be the case that you make amends. Here are some other examples: ‘If he is guilty, he should confess’, ‘If you have
hurt your friend, you should apologise to her’, ‘If she will not keep her promise to him, she ought to call him’, ‘If the books are not returned by the due date, you must pay a fine’. Alternatively,
we might say that a contrary-to-duty obligation is a conditional obligation where the condition (in the obligation) is forbidden, or where the condition is fulfilled only if a primary obligation is
violated. In the first example, he should not be guilty; but if he is, he should confess. You should not have hurt your friend; but if you have, you should apologise. She should keep her promise to
him; but if she will not, she ought to call him. The books ought to be returned by the due date; but if they are not, you must pay a fine.
Contrary-to-duty obligations are important in our moral and legal thinking. They turn up in discussions concerning guilt, blame, confession, restoration, reparation, punishment, repentance,
retributive justice, compensation, apologies, damage control, and so forth. The rationale of a contrary-to-duty obligation is the fact that most of us do neglect our primary duties from time to time
and yet it is reasonable to believe that we should make the best of a bad situation, or at least that it matters what we do when this is the case.
We want to find an adequate symbolisation of such obligations in some logical system. However, it has turned out to be difficult to do that. This is shown by the so-called contrary-to-duty
(obligation) paradox, sometimes called the contrary-to-duty imperative paradox. The contrary-to-duty paradox arises when we try to formalise certain intuitively consistent sets of ordinary language
sentences, sets that include at least one contrary-to-duty obligation sentence, by means of ordinary counterparts available in various monadic deontic logics, such as the so-called Standard Deontic
Logic and similar systems. In many of these systems the resulting sets are inconsistent in the sense that it is possible to deduce contradictions from them, or else they violate some other
intuitively plausible condition, for example that the members of the sets should be independent of each other. This article discusses this paradox and some solutions that have been suggested in the
Table of Contents
The Contrary-to-Duty Paradox
Solutions to the Paradox
Quick Solutions
Operator Solutions
Connective Solutions
Action or Agent Solutions
Temporal Solutions
References and Further Reading
1. The Contrary-to-Duty Paradox
Roderick Chisholm was one of the first philosophers to address the contrary-to-duty (obligation or imperative) paradox (Chisholm (1963)). Since then, many different versions of this puzzle have been
mentioned in the literature (see, for instance, Powers (1967), Åqvist (1967, 2002), Forrester (1984), Prakken and Sergot (1996), Carmo and Jones (2002), and Rönnedal (2012, pp. 61–66) for some
examples). Here we discuss a particular version of a contrary-to-duty (obligation) paradox that involves promises; we call this example ‘the promise (contrary-to-duty) paradox’. Most of the things we
say about this particular example can be applied to other versions. But we should keep in mind that different contrary-to-duty paradoxes might require different solutions.
Scenario I: The promise (contrary-to-duty) paradox (After Prakken and Sergot (1996))
Consider the following scenario. It is Monday and you promise a friend to meet her on Friday to help her with some task. Suppose, further, that you always meet your friend on Saturdays. In this
example the following sentences all seem to be true:
N1. (On Monday it is true that) You ought to keep your promise (and see your friend on Friday).
N2. (On Monday it is true that) It ought to be that if you keep your promise, you do not apologise (when you meet your friend on Saturday).
N3. (On Monday it is true that) If you do not keep your promise (that is, if you do not see your friend on Friday and help her out), you ought to apologise (when you meet her on Saturday).
N4. (On Monday it is true that) You do not keep your promise (on Friday).
Let N-CTD = {N1, N2, N3, N4}. N3 is a contrary-to-duty obligation (or expresses a contrary-to-duty obligation). If the condition is true, the primary obligation that you should keep your promise
(expressed by N1) is violated. N-CTD seems to be consistent as it does not seem possible to derive any contradiction from this set. Nevertheless, if we try to formalise N-CTD in so-called Standard
Deontic Logic, for instance, we immediately encounter some problems. Standard Deontic Logic is a well-known logical system described in most introductions to deontic logic (for example, Gabbay,
Horty, Parent, van der Meyden and van der Torre (eds.) (2013, pp. 36–39)). It is basically a normal modal system of the kind KD (Chellas (1980)). In Åqvist (2002) this system is called OK+. For
introductions to deontic logic, see Hilpinen (1971, 1981), Wieringa and Meyer (1993), McNamara (2010), and Gabbay et al. (2013). Consider the following symbolisation:
SDL1 Ok
SDL2 O(k → ¬a)
SDL3 ¬k → Oa
SDL4 ¬k
O is a sentential operator that takes a sentence as argument and gives a sentence as value. ‘Op’ is read ‘It ought to be (or it should be) the case that (or it is obligatory that) p’. ¬ is standard
negation and → standard material implication, well known from ordinary propositional logic. In SDL-CTD, k is a symbolisation of ‘You keep your promise (meet your friend on Friday and help her with
her task)’ and a abbreviates ‘You apologise (to your friend for not keeping your promise)’. In this symbolisation SDL1 is supposed to express a primary obligation and SDL3 a contrary-to-duty
obligation telling us what ought to be the case if the primary obligation is violated. However, the set SDL-CTD = {SDL1, SDL2, SDL3, SDL4} is not consistent in Standard Deontic Logic. O¬a is
entailed by SDL1 and SDL2, and from SDL3 and SDL4 we can derive Oa. Hence, we can deduce the following formula from SDL-CTD: Oa ∧ O¬a (‘It is obligatory that you apologise and it is obligatory that
you do not apologise’), which directly contradicts the so-called axiom D, the schema ¬(OA ∧ O¬A). (∧ is the ordinary symbol for conjunction.) ¬(OA ∧ O¬A) is included in Standard Deontic Logic
(usually as an axiom). Clearly, this sentence rules out explicit moral dilemmas. Since N-CTD seems to be consistent, while SDL-CTD is inconsistent, something must be wrong with our formalisation,
with Standard Deontic Logic or with our intuitions. In a nutshell, this puzzle is the contrary-to-duty (obligation) paradox.
2. Solutions to the Paradox
Many different solutions to the contrary-to-duty paradox have been suggested in the literature. We can try to find some alternative formalisation of N-CTD, we can try to develop some other kind of
deontic logic or we can try to show why at least some of our intuitions about N-CTD are wrong. The various solutions can be divided into five categories: quick solutions, operator solutions,
connective solutions, action or agent solutions, and temporal solutions, and these categories can be divided into several subcategories. Various answers to the puzzle are often presented as general
solutions to all different kinds of contrary-to-duty paradoxes; and if some proposal takes care of all the different kinds, this is a strong reason to accept this solution. Having said that, it might
be the case that the same approach cannot be used to solve all kinds of contrary-to-duty paradoxes.
a. Quick Solutions
In this section, we consider some quick responses to the contrary-to-duty paradox. There are at least three types of replies of this kind: (1) We can reject some axiom schemata or rules of inference
in Standard Deontic Logic that are necessary to derive our contradiction. (2) We can try to find some alternative formalisation of N-CTD in monadic deontic logic. (3) We can bite the bullet and
reject some of the original intuitions that seem to generate the paradox in the first place.
Few people endorse any of these solutions. Still, it is interesting to say a few words about them since they reveal some of the problems with finding an adequate symbolisation of contrary-to-duty
obligations. If possible, we want to be able to solve these problems.
One way of avoiding the contrary-to-duty paradox in monomodal deontic systems is to give up the axiom D, ¬(OA ∧ O¬A) (‘It is not the case that it is obligatory that A and obligatory that not-A’).
Without this axiom (or something equivalent), it is no longer possible to derive a contradiction from SDL1−SDL4. In the so-called smallest normal deontic system K (Standard Deontic Logic without the
axiom D), for instance, SDL-CTD is consistent. Some might think that there are independent reasons for rejecting D since they think there are, or could be, genuine moral dilemmas. Yet, even if this
were true (which is debatable), rejecting D does not seem to be a good solution to the contrary-to-duty paradox for several reasons.
Firstly, even if we reject axiom D, it is problematic to assume that a dilemma follows from N-CTD. We can still derive the sentence Oa ∧ O¬a from SDL-CTD in every normal deontic system, which says
that it is obligatory that you apologise and it is obligatory that you do not apologise. And this proposition does not seem to follow from N-CTD. Ideally, we want our solution to the paradox to be
dilemma-free in the sense that it is not possible to derive any dilemma of the form OA ∧ O¬A from our symbolisation of N-CTD.
Secondly, in every so-called normal deontic logic (even without the axiom D), we can derive the conclusion that everything is both obligatory and forbidden if there is at least one moral dilemma.
This follows from the fact that FA (‘It is forbidden that A’) is equivalent to O¬A (‘It is obligatory that not-A’) and the fact that Oa ∧ O¬a entails Or for any r in every normal deontic system.
This is clearly absurd. N-CTD does not seem to entail that everything is both obligatory and forbidden. Everything else equal, we want our solution to the contrary-to-duty paradox to avoid this
Thirdly, such a solution still has problems with the so-called pragmatic oddity (see below, this section).
In monomodal deontic logic, for instance Standard Deontic Logic, we can solve the contrary-to-duty paradox by finding some other formalisation of the sentences in N-CTD. Instead of SDL2 we can use k
→ O¬a and instead of SDL3 we can use O(¬k → a). Then we obtain three consistent alternative symbolisations of N-CTD. Nonetheless, these alternatives are not non-redundant (a set of sentences is
non-redundant only if no member in the set follows from the rest). O(¬k → a) follows from Ok in every so-called normal deontic logic, including Standard Deontic Logic, and k → O¬a follows from ¬k
by propositional logic. But, intuitively, N3 does not appear to follow from N1, and N2 does not appear to follow from N4. N-CTD seems to be non-redundant in that it seems to be the case that no
member of this set is derivable from the others. Therefore, we want our symbolisation of N-CTD to be non-redundant.
The so-called pragmatic oddity is a problem for many possible solutions to the contrary-to-duty paradox, including our original symbolisation in Standard Deontic Logic, that is, SDL-CTD, the same
symbolisation in the smallest normal deontic system K, and the one that uses k → O¬a instead of O(k → ¬a). In every normal deontic logic (with or without the axiom D), it is possible to derive the
following sentence from SDL-CTD: O(k ∧ a), which says that it is obligatory that you keep your promise and apologise (for not keeping your promise). Several solutions that use bimodal alethic-deontic
logic or counterfactual deontic logic (see Section 2c) as well as Castañeda’s solution (see Section 2d), for instance, also have this problem. The sentence O(k ∧ a) is not inconsistent, but it is
certainly very odd, and it does not appear to follow from N-CTD that you should keep your promise and apologise. Hence, we do not want our formalisation of N-CTD to entail this counterintuitive
conclusion or anything similar to it.
One final quick solution is to reject some intuition. The set of sentences N-CTD in natural language certainly seems to be consistent and non-redundant, it seems to be dilemma-free, and it does not
seem to entail the pragmatic oddity or the proposition that everything is both obligatory and forbidden. One possible solution to the contrary-to-duty paradox, then, obviously, is to reject some of
these intuitions about this set. If it is not consistent and non-redundant, for instance, there is nothing puzzling about the fact that our set of formalised sentences (for example SDL-CTD) lack one
or both of these properties. In fact, if this is the case, the symbolisation should be inconsistent and/or redundant.
The problem with this solution is, of course, that our intuitions seem reliable. N-CTD clearly seems to be consistent, non-redundant, and so forth. And we do not appear to have any independent
reasons for rejecting these intuitions. It might be the case that sometimes when we use contrary-to-duty talk, we really are inconsistent or non-redundant, for instance. Still, that does not mean
that we are always inconsistent or non-redundant. If N-CTD or some other set of this kind is consistent, non-redundant, and so on, we cannot use this kind of solution to solve all contrary-to-duty
paradoxes. Furthermore, it seems that we should not reject our intuitions if there is some better way to solve the contrary-to-duty paradox. So, let us turn to the other solutions. (For more
information on quick solutions to the contrary-to-duty paradox, see Rönnedal (2012, pp. 67–98).)
b. Operator Solutions
We shall begin by considering the operator solution. The basic idea behind this kind of solution is that the contrary-to-duty paradox, in some sense, involves different kinds of obligations or
different kinds of ‘ought-statements’. Solutions of this type have, for example, been discussed by Åqvist (1967), Jones and Pörn (1985), and Carmo and Jones (2002).
In Standard Deontic Logic a formula of the form OA ∧ O¬A is derivable from SDL-CTD; but OA ∧ O¬A is not consistent with the axiom D. If, however, there are different kinds of obligations,
symbolised by distinct obligation operators, it may be possible to formalise our contrary-to-duty scenarios so as to avoid a contradiction. Suppose, for example, that there are two obligation
operators O1 and O2 that represent ideal and actual obligations, respectively. Then, it is possible that instead of Oa ∧ O¬a we may derive the formula O1¬a ∧ O2a from the symbolisation of our
scenarios. But O1¬a ∧ O2a is not inconsistent with the axiom D; O1¬a ∧ O2a says that it is ‘ideally-obligatory’ that you do not apologise and it is ‘actually-obligatory’ that you apologise. If we
cannot derive any other formula of the form OA ∧ O¬A, it is no longer possible to derive a contradiction from our formalisation. Furthermore, such a solution seems to be dilemma-free, and it does
not seem to be possible to derive the conclusion that everything is both obligatory and forbidden from a set of sentences that introduces different kinds of obligations.
An example: Carmo and Jones’s operator solution
Perhaps the most sophisticated version of this kind of solution is presented by Carmo and Jones (2002). Let us now discuss their answer to the contrary-to-duty paradox to illustrate this basic
approach. To understand their view, we must first explain some formal symbols. Carmo and Jones use a dyadic, conditional obligation operator O(…/…) to represent conditional obligations. Intuitively,
‘O(B/A)’ says that in any context in which A is a fixed or unalterable fact, it is obligatory that B, if this is possible. They use two kinds of monadic modal operators: □ and ◇, and □ and ◇.
Intuitively, □ is intended to capture that which—in a particular situation—is actually fixed, or unalterable, given (among other factors) what the agents concerned have decided to do and not to do.
So, □A says that it is fixed or unalterable that A. ◇ is the dual (possibility operator) of □. Intuitively, □ is intended to capture that which—in a particular situation—is not only actually fixed,
but would still be fixed even if different decisions had been made, by the agents concerned, regarding how they were going to behave. So, □A says that it is necessary, fixed or unalterable that A, no
matter what the agents concerned intend to do or not to do. ◇ is the dual (possibility operator) of □. They also introduce two kinds of derived obligation sentences, OaB and OiB, pertaining to actual
obligations and ideal obligations, respectively. OaB is read ‘It is actually obligatory that B’ or ‘It actually ought to be the case that B’, and OiB is read ‘It is ideally obligatory that B’ or ‘It
ideally ought to be the case that B’. T is (the constant) Verum; it is equivalent to some logically true sentence (such as, it is not the case that p and not-p). In short, we use the following
O(B/A) In any context in which A is fixed, it is obligatory that B, if this is possible.
OaB It is actually obligatory that B.
OiB It is ideally obligatory that B.
◇A It is actually possible that A.
◇A It is potentially possible that A.
□A It is not actually possible that not-A.
□A It is not potentially possible that not-A.
T Verum
Before we consider Carmo and Jones’s actual solution to the contrary-to-duty paradoxes, let us say a few words about the formal properties of various sentences in their language. For more on the
syntax and semantics of Carmo and Jones’s system, see Carmo and Jones (2002). □ (and ◇) is a normal modal operator of kind KT, and □ (and ◇) is a normal modal operator of kind KD (Chellas (1980)). □A
is stronger than □A, and ◇A is stronger than ◇A. There is, according to Carmo and Jones, an intimate conceptual connection between the two notions of derived obligation, on the one hand, and the two
notions of necessity/possibility. The system includes □(A ↔ B) → (OaA ↔ OaB) and □(A ↔ B) → (OiA ↔ OiB) for example. The system also contains the following restricted forms of so-called factual
detachment: (O(B/A) ∧ □A ∧ ◇B ∧ ◇¬B) → OaB, and (O(B/A) ∧ □A ∧ ◇B ∧ ◇¬B) → OiB. We can now symbolise N-CTD in the following way:
O1 O(k/T)
O2 O(¬a/k)
O3 O(a/¬k)
O4 ¬k
We use the same propositional letters as in Section 1. Furthermore, we assume that the following ‘facts’ hold: □¬k, ◇(k ∧ ¬a), ◇(k ∧ a), ¬a ∧ ◇a ∧ ◇¬a. In other words, we assume that you decide
not to keep your promise, but that it is potentially possible for you to keep your promise and not apologise and potentially possible for you to keep your promise and apologise, and that you have not
in fact apologised, although it is still actually possible that you apologise and actually possible that you do not apologise. From this, we can derive the following sentences in Carmo and Jones’s
system: Oi(k ∧ ¬a) and Oaa; that is, ideally it ought to be that you keep your promise (and help your friend) and do not apologise, but it is actually obligatory that you apologise. Furthermore, the
obligation to keep your promise is violated and the ideal obligation to keep your promise and not apologise is also violated. Still, we cannot derive any contradiction. From Oi(k ∧ ¬a) we cannot
derive any actual obligation not to apologise. Consequently, we can avoid the contrary-to-duty paradox.
Arguments for Carmo and Jones’s operator solution
According to Carmo and Jones, any adequate solution to the contrary-to-duty paradox should satisfy certain requirements. The representation of N-CTD (and similar sets of sentences) should be: (i)
consistent, and (ii) non-redundant, in the sense that the formalisations of the members of N-CTD should be logically independent. The solution should be (iii) applicable to (at least apparently)
action- and timeless contrary-to-duty examples (see Section 2d and Section 2e for some examples). (iv) The logical structures of the two conditional obligations in N-CTD (and similar sets of
sentences) should be analogous. Furthermore, we should have (v) the capacity to derive actual and (vi) ideal obligations (from (the representation of) N-CTD), (vii) the capacity to represent the fact
that a violation of an obligation has occurred, and (viii) the capacity to avoid the pragmatic oddity (see Section 2a above for a description of this problem). Finally, (ix) the assignment of logical
form to a sentence in a contrary-to-duty scenario should be independent of the assignment of logical form to the other sentences. Carmo and Jones’s solution satisfies all of these requirements. This
is a good reason to accept their approach. Nevertheless, there are also some serious problems with the suggested solution. We now consider two puzzles.
Arguments against Carmo and Jones’s operator solution
Even though Carmo and Jones’s operator solution is quite interesting, it has not generated much discussion. In this section, we consider two arguments against their solution that have not been
mentioned in the literature.
Argument 1. Carmo and Jones postulate several different unconditional operators. But ‘ought’ (and ‘obligatory’) does not seem to be ambiguous in the sense the solution suggests. The derived ‘ideal’
obligation to keep the promise and not to apologise does not seem to be of another kind than the derived ‘actual’ obligation to apologise. The ‘ideal’ obligation is an ordinary unconditional
obligation to keep your promise and not apologise that holds as long as it is still possible for you to keep your promise and not apologise. And the ‘actual’ obligation is an ordinary unconditional
obligation that becomes ‘actual’ as soon as it is settled that you will not keep your promise. Both obligations are unconditional and both obligations are action guiding. The ‘ought’ in the sentence
‘You ought to keep your promise and not apologise’ does not have another meaning than the ‘ought’ in the sentence ‘You ought to apologise’. The only difference between the obligations is that they
are in force at different times. Or, at least, so it seems. Furthermore, if the conditional obligation sentences N2 and N3 should be symbolised in the same way, if they have the same logical form, as
Carmo and Jones seem to think, it also seems reasonable to assume that the derived unconditional obligation sentences should be symbolised by the same kind of operator.
Argument 2. Carmo and Jones speak about two kinds of obligations: actual obligations and ideal obligations. But it is unclear which of these, if either, they think is action guiding. We have the
following alternatives:
(i) Both actual and ideal obligations are action guiding.
(ii) Neither actual nor ideal obligations are action guiding.
(iii) Ideal but not actual obligations are action guiding.
(iv) Actual but not ideal obligations are action guiding.
Yet, all of these alternatives are problematic. It seems that (i) cannot be true. For in Carmo and Jones’s system, we can derive Oi(k ∧ ¬a) and Oaa from the symbolisation of N-CTD. Still, there is
no possible world in which it is true both that you keep your promise and not apologise and that you apologise. How, then, can both actual and ideal obligations be action guiding? If we assume that
neither actual nor ideal obligations are action guiding, we can avoid this problem, but then the value of Carmo and Jones’s solution is seriously limited. We want, in every situation, to know what we
(actually) ‘ought to do’ in a sense of ‘ought to do’ that is action guiding. Nevertheless, according to (ii), neither ideal nor actual obligations are action guiding. In this reading of the text,
Carmo and Jones’s system cannot give us any guidance; it does not tell us what we ‘ought to do’ in what seems to be the most interesting sense of this expression. True, the solution does say
something about ideal and actual obligations, but why should we care about that? So, (ii) does not appear to be defensible. If it is the ideal and not the actual obligations that are supposed to be
action guiding, it is unclear what the purpose of speaking about ‘actual’ obligations is. If actual obligations are supposed to have no influence on our behaviour, they seem to be redundant and serve
no function. Moreover, if this is true, why should we call obligations of this kind ‘actual’? Hence, (iii) does not appear to be true either. The only reasonable alternative, therefore, seems to be
to assume that it is the actual and not the ideal obligations that are action guiding. Yet, this assumption is also problematic, since it has some counterintuitive consequences. If you form the
intention not to keep your promise, if you decide not to help your friend, your actual obligation is to apologise, according to Carmo and Jones. You have an ideal obligation to keep your promise and
not apologise, but this obligation is not action guiding. So, it is not the case that you ought to keep your promise and not apologise in a sense that is supposed to have any influence on your
behaviour. However, intuitively, it seems to be true that you ought to keep your promise and not apologise as long as you still can keep your promise; as long as this is still (potentially) possible,
this seems to be your ‘actual’ obligation, the obligation that is action guiding. As long as you can help your friend (and not apologise), you do not seem to have an actual (action-guiding)
obligation to apologise. The fact that you have decided not to keep your promise does not take away your (actual, action-guiding) obligation to keep your promise (and not apologise); you can still
change your mind. We cannot avoid our obligations just by forming the intention not to fulfil them. This would make it too easy to get rid of one’s obligations. Consequently, it seems that (iv) is
not true either. And if this is the case, Carmo and Jones’s solution is in deep trouble, despite its many real virtues.
c. Connective Solutions
We turn now to our second category of solutions to the contrary-to-duty paradox. In Section 1, we interpreted the English construction ‘if, then’ as material implication. But there are many other
possible readings of this expression. According to the connective solutions to the contrary-to-duty paradox, ‘if, then’ should be interpreted in some other way, not as a material implication. The
category includes at least four subcategories: (1) the modal (or strict implication) solution according to which ‘if, then’ should be interpreted as strict or necessary implication; (2) the
counterfactual (or subjunctive) solution according to which ‘if, then’ should be interpreted as some kind of subjunctive or counterfactual conditional; (3) the non-monotonic solution according to
which we should use some kind of non-monotonic logic to symbolise the expression ‘if, then’; and (4) the (primitive) dyadic deontic solution according to which we should develop a new kind of dyadic
deontic logic with a primitive, two-place sentential operator that can be used to symbolise conditional norms.
According to the first solution, which we call the modal solution, ‘if, then’ should be interpreted as strict, that is, necessary implication, not as material implication. N2 should, for example, be
symbolised in the following way: k => O¬a (or perhaps as O(k => ¬a)), and N3 in the following way: ¬k => Oa (or perhaps as O(¬k => a)), where => stands for strict implication and the
propositional letters are interpreted as in Section 1. A => B is logically equivalent to □(A → B) in most modal systems. □ is a sentential operator that takes one sentence as argument and gives one
sentence as value. ‘□A’ says that it is necessary that A. The set {Ok, k => O¬a, ¬k => Oa, ¬k} is consistent in some alethic deontic systems (systems that combine deontic and modal logic). So, if
we use this symbolisation, it might be possible to avoid the contrary-to-duty paradox. A solution of this kind is discussed by Mott (1973), even though Mott seems to prefer the counterfactual
solution. For more on this kind of approach and for some problems with it, see Rönnedal (2012, pp. 99–102).
According to the second solution, the counterfactual solution, the expression ‘if, then’ should be interpreted as some kind of counterfactual or subjunctive implication. Mott (1973) and Niles (1997),
for example, seem to defend a solution of this kind, while Tomberlin (1981) and Decew (1981), for instance, criticise it. We say more about the counterfactual solution below (in this section).
According to the third solution, the non-monotonic solution, we should use some kind of non-monotonic logic to symbolise the expression ‘if, then’. A solution of this kind has been discussed by
Bonevac (1998). Bonevac introduces a new, non-monotonic, defeasible or generic conditional, >, a sentential operator that takes two sentences as arguments and gives one sentence as value. A > B is
true in a possible world, w, if and only if B holds in all A-normal worlds relative to w. This conditional does not support ordinary modus ponens, that is, B does not follow from A and A > B. It only
satisfies defeasible modus ponens, that B follows non-monotonically from A and A > B in the absence of contrary information. If we symbolise N2 as O(k > ¬a) (or perhaps as k > O¬a), and N3 as ¬k >
Oa (and N1 and N4 as in SDL-CTD), we can no longer derive a contradiction from this set in Bonevac’s system. O¬a follows non-monotonically from Ok and O(k > ¬a), and Oa follows non-monotonically
from ¬k and ¬k > Oa. But from {Ok, O(k > ¬a), ¬k > Oa, ¬k} we can only derive Oa non-monotonically. According to Bonevac, so-called factual detachment takes precedence over so-called deontic
detachment. Hence, we can avoid the contrary-to-duty paradox.
A potential problem with this kind of solution is that it is not obvious that it can explain the difference between violation and defeat. If you will not see your friend and help her out, the
obligation to keep your promise will be violated. It is not the case that this obligation is defeated, overridden or cancelled. The same seems to be true of the derived obligation that you should not
apologise. If you do apologise, the derived (unconditional) obligation that you should not apologise is violated. It is not the case that one of the conditional norms in N-CTD defeat or override the
other. Nor is it the case that they cancel each other out. Or, at least, so it seems. Ideally, we want our solution to reflect the idea that the primary obligation in a contrary-to-duty paradox has
been violated and not defeated. Likewise, we want to be able to express the idea that the derived unconditional obligation not to apologise has been violated if you apologise. However, according to
Bonevac, we cannot derive O¬a from {Ok, O(k > ¬a), ¬k > Oa, ¬k}, not even non-monotonically. This approach to the contrary-to-duty paradoxes does not appear to have generated that much
discussion. But the non-monotonic paradigm is interesting and Bonevac’s paper provides a fresh view on the paradox.
According to the fourth solution, the (pure) dyadic deontic solution, we should develop a new kind of dyadic deontic logic with a primitive, two-place sentential operator that can be used to
symbolise conditional norms. Sometimes O(B/A) is used to symbolise such norms, sometimes O[A]B, and sometimes AOB. Here we use the following construction: O[A]B. ‘O[A]B’ is read ‘It is obligatory (or
it ought to be the case) that B given A’. This has been one of the most popular solutions to the contrary-to-duty paradox and it has many attractive features. Nevertheless, we do not say anything
more about it in this article, since we discuss a temporal version of the dyadic deontic solution in Section 2e. For more on this kind of approach and for some problems with it, see Åqvist (1984,
1987, 2002) and Rönnedal (2012, pp. 112–118). For more on dyadic deontic logic, see Rescher (1958), von Wright (1964), Danielsson (1968), Hansson (1969), van Fraassen (1972, 1973), Lewis (1974), von
Kutschera (1974), Greenspan (1975), Cox (Al-Hibri) (1978), and van der Torre and Tan (1999). Semantic tableau systems for dyadic deontic logic are developed by Rönnedal (2009).
An example: The counterfactual solution
We now consider the counterfactual solution to the contrary-to-duty paradox and some arguments for and against this approach. Mott (1973) and Niles (1997), for example, are sympathetic to this kind
of view, while Tomberlin (1981) and Decew (1981), for instance, criticise it. Some of the arguments in this section have previously been discussed in Rönnedal (2012, pp. 102–106). For more on
combining counterfactual logic and deontic logic, see the Appendix, Section 7, in Rönnedal (2012), Rönnedal (2016) and Rönnedal (2019); the tableau systems that are used in this section are described
in those works.
In a counterfactual deontic system, a system that combines counterfactual logic and deontic logic, we can symbolise the concept of a conditional obligation in at least four interesting ways: (A □→
OB), O(A □→ B), (A □⇒ OB) and O(A □⇒ B). □→ (and □⇒) is a two-place, sentential operator that takes two sentences as arguments and gives one sentence as value. ‘A □→ B’ (and ‘A □⇒ B’) is often read
‘If A were the case, then B would be the case’. (The differences between □→ and □⇒ are unimportant in this context and as such we focus on □→.) So, maybe we can use some of these formulas to
symbolise contrary-to-duty obligation sentences and avoid the contrary-to-duty paradox. Let us now consider one possible formalisation of N-CTD that seems to be among the most plausible in
counterfactual deontic logic. In the discussion of Argument 2 in this section (see below), we consider two more attempts.
CF1 Ok
CF2 k □→ O¬a
CF3 ¬k □→ Oa
CF4 ¬k
Let CF-CTD = {CF1, CF2, CF3, CF4}. From CF3 and CF4 we can deduce Oa, but it is not possible to derive O¬a from CF1 and CF2, at least not in most reasonable counterfactual deontic systems. Hence, we
cannot derive a contradiction in this way.
Arguments for the counterfactual solution
This solution to the contrary-to-duty paradox is attractive for many reasons. (1) CF-CTD is consistent, as we already have seen. (2) The set is non-redundant. CF3 does not seem to be derivable from
CF1, and CF2 does not seem to be derivable from CF4 in any interesting counterfactual deontic logic. (3) The set is dilemma-free. We cannot derive Oa ∧ O¬a from CF-CTD, nor anything else of the form
OA ∧ O¬A. (4) We cannot derive the proposition that everything is both obligatory and forbidden from CF-CTD. (5) We can easily express the idea that the primary obligation to keep the promise has
been violated in counterfactual deontic logic. This is just the conjunction of CF1 and CF4. (6) All conditional obligations can be symbolised in the same way. CF2 has the same logical form as CF3.
(7) We do not have to postulate several different kinds of unconditional obligations. The unconditional obligation to keep the promise is the same kind of obligation as the derived unconditional
obligation to apologise. This is a problem for Carmo and Jones’s operator solution (Section 1 above). (8) The counterfactual solution can take care of apparently actionless contrary-to-duty
paradoxes. Such paradoxes are a problem for the action or agent solutions (see Section 2d). (9) The counterfactual solution can perhaps take care of apparently timeless contrary-to-duty paradoxes.
Such paradoxes are a problem for the temporal solution (see Section 2e). (Whether or not this argument is successful is debatable.) (10) From CF3 and CF4 we can derive the formula Oa, which says that
you should apologise, and, intuitively, it seems that this proposition follows from N3 and N4 (at least in some contexts). (11) In counterfactual deontic logic a conditional obligation can be
expressed by a combination of a counterfactual conditional and an ordinary (unconditional) obligation. We do not have to introduce any new primitive dyadic deontic operators. According to the dyadic
and temporal dyadic deontic solutions (see above in this section and Section 2e below), we need some new primitive dyadic deontic operator to express conditional obligations.
Hence, the counterfactual solution to the contrary-to-duty paradox seems to be among the most plausible so far suggested in the literature. Nonetheless, it also has some serious problems. We now
consider four arguments against this solution. For more on some problems, see Decew (1981) and Tomberlin (1981), and for some responses, see Niles (1997).
Arguments against the counterfactual solution
Argument 1. The symbol □→ has often been taken to represent conditional sentences in the subjunctive, not in the indicative form. That is, A □→ B is read ‘If it were the case that A, then it would be
the case that B’, not ‘If A is the case, then B is the case’ (or ‘If A, then B’). So, the correct reading of k □→ O¬a seems to be ‘If you were to keep your promise, then it would be obligatory that
you do not apologise’, and the correct reading of ¬k □→ Oa seems to be ‘If you were not to keep your promise, then it would be obligatory that you apologise’. If this is true, the formal sentences
CF2 and CF3 do not correctly reflect the meaning of the English sentences N2 and N3, because the English sentences are not in the subjunctive form.
Here is a possible response to this argument. A □→ B might perhaps be used to symbolize indicative conditionals and not only subjunctive conditionals, and if this is the case, we can avoid this
problem. Furthermore, maybe the formulation in natural language is not satisfactory. Maybe the English sentences in N-CTD are more naturally formulated in the subjunctive form. So, ‘It ought to be
that if you keep your promise, you do not apologise’ is taken to mean the same thing as ‘If you were to keep your promise, then it would be obligatory that you do not apologise’; and ‘If you do not
keep your promise, you ought to apologise’ is taken to say the same thing as ‘If you were not to keep your promise, then it would be obligatory that you apologise’. And if this is the case, the
symbolisations might very well be reasonable. To decide whether this is the case or not, it seems that we have to do much more than just look at the surface structure of the relevant sentences. So,
this argument—while interesting—does not seem to be conclusive.
Argument 2. In counterfactual deontic logic, N2 can be interpreted in (at least) two ways: k □→ O¬a (CF2) or O(k □→ ¬a) (CF2(b)). Faced with the choice between two plausible formalisations of a
certain statement, we ought to choose the stronger one. CF2(b) is stronger than CF2. So, N2 should be symbolized by CF2(b) and not by CF2. Furthermore, CF2(b) corresponds better with the surface
structure of N2 than CF2; in N2 the expression ‘It ought to be that’ has a wide and not a narrow scope. This means that N-CTD should be symbolized in the following way:
CF1 Ok
CF2(b) O(k □→ ¬a)
CF3 ¬k □→ Oa
CF4 ¬k
Let C2F-CTD = {CF1, CF2(b), CF3, CF4}. Yet, in this reading, the paradox is reinstated, for C2F-CTD is inconsistent in most plausible counterfactual deontic systems. (An argument of this kind against
a similar contrary-to-duty paradox can be found in Tomberlin (1981).) Let us now prove this. (In the proofs below, we use some semantic tableau systems that are described in the Appendix, Section 7,
in Rönnedal (2012); temporal versions of these systems can be found in Rönnedal (2016). All rules that are used in our deductions are explained in these works.) First, we establish a derived rule,
rule DR8, which is used in our proofs. This rule is admissible in any counterfactual (deontic) system that contains the tableau rule Tc5.
Derivation of DR8.
(1) A □→ B, i
(2) ¬(A → B), i [CUT] (3) A → B, i [CUT]
(4) A, i [2, ¬→]
(5) ¬B, i [2, ¬→]
(6) irAi [4, Tc5]
(7) B, i [1, 6, □→]
(8) * [5, 7]
Now we are in a position to prove that C2F-CTD is inconsistent. To prove that a set of sentences A1, A2, …, An is inconsistent in a tableau system S, we construct an S-tableau which begins with every
sentence in this set suffixed in an appropriate way, such as A1, 0, A2, 0, …, An, 0. If this tableau is closed, that is, if every branch in it is closed, the set is inconsistent in S. (‘MP’ stands
for the derived tableau rule Modus Ponens.)
(1) Ok, 0
(2) O(k □→ ¬a), 0
(3) ¬k □→ Oa, 0
(4) ¬k, 0
(5) ¬k → Oa, 0 [3, DR8]
(6) Oa, 0 [4, 5, MP]
(7) 0s1 [T − dD]
(8) k, 1 [1, 7, O]
(9) k □→ ¬a, 1 [2, 7, O]
(10) a, 1 [6, 7, O]
(11) k → ¬a, 1 [9, DR8]
(12) ¬a, 1 [8, 11, MP]
(13) * [10, 12]
So, the counterfactual solution is perhaps not so plausible after all. Nevertheless, this argument against this solution is problematic for at least two different reasons.
(i) It is not clear in what sense CF2(b) is ‘stronger’ than CF2. Tomberlin does not explicitly discuss what he means by this expression in this context. Usually one says that a formula A is
(logically) stronger than a formula B in a system S if and only if A entails B but B does not entail A in S. In this sense, CF2(b) does not seem to be stronger than CF2 in any interesting
counterfactual deontic logic. But perhaps one can understand ‘stronger’ in some other sense in this argument. CF2(b) is perhaps not logically stronger than CF2, but it is a more natural
interpretation of N2 than CF2. Recall that N2 says that it ought to be that if you keep your promise, then you do not apologise. This suggests that the correct symbolisation of N2 is O(k □→ ¬a), not
k □→ O¬a; in other words, the O-operator should have a wide and not a narrow scope.
(ii) Let us grant that O(k □→ ¬a) is stronger than k □→ O¬a in the sense that the former is more natural than the latter. Furthermore, it is plausible to assume that if two interpretations of a
sentence are reasonable one should choose the stronger or more natural one (as a pragmatic rule and ceteris paribus). Hence, CF2 should be symbolised as O(k □→ ¬a) and not as k □→ O¬a. Here is a
possible counterargument. Both O(k □→ ¬a) and k □→ O¬a are reasonable interpretations of N2. So, ceteris paribus we ought to choose O(k □→ ¬a). But if we choose O(k □→ ¬a) the resulting set
C2F-CTD is inconsistent. Thus, in this case, we cannot (or should not) choose O(k □→ ¬a) as a symbolisation of N2. We should instead choose the narrow scope interpretation k □→ O¬a. Furthermore, it
is not obvious that N2 says something other than the following sentence: ‘If you keep your promise, it ought to be the case that you do not apologise’ (N2b). And here k □→ O¬a seems to be a more
natural symbolisation. Even if N2 and N2b are not equivalent, N2b might perhaps express our original idea better than N2. Consequently, this argument does not seem to be conclusive. However, it does
seem to show that C2F-CTD is not a plausible solution to the contrary-to-duty paradox.
What happens if we try some other formalisation of N3? Can we avoid this problem then? Let us consider one more attempt to symbolize N-CTD in counterfactual deontic logic.
CF1 Ok
CF2(b) O(k □→ ¬a)
CF3(b) O(¬k □→ a)
CF4 ¬k
Let C3F-CTD = {CF1, CF2(b), CF3(b), CF4}. In this set N3 is once more represented by a sentence where the O-operator has wide scope. From this set we can derive O¬a from CF1 and CF2(b), but not Oa
from CF3(b) and CF4. The set is not inconsistent.
Yet, this solution is problematic for another reason. All of the following sentences seem to be true: O(k □→ ¬a), k □→ O¬a, ¬k □→ Oa, but O(¬k □→ a) seems false. According to the standard
truth-conditions for counterfactuals, A □→ B is true in a possible world w if and only if B is true in every possible world that is as close as (as similar as) possible to w in which A is true; and
OA is true in a possible world w if and only if A is true in every possible world that is deontically accessible from w. If we think of the truth-conditions in this way, O(¬k □→ a) is true in w (our
world) if and only if ¬k □→ a is true in all ideal worlds (in all possible worlds that are deontically accessible from w), that is, if and only if: in every ideal world w’ deontically accessible
from w, a is true in all the worlds that are as close to w’ as possible in which ¬k is true. But in all ideal worlds you keep your promise, and in all ideal worlds, if you keep your promise, you do
not apologise. From this it follows that in all ideal worlds you do not apologise. Accordingly, in all ideal worlds you keep your promise and do not apologise. Take an ideal world, say w’. In the
closest ¬k world(s) to w’, ¬a seems to be true (since ¬a is true in w’). If this is correct, ¬k and ¬a is true in one of the closest ¬k worlds to w’. So, ¬k □→ a is not true in w’. Hence, O(¬
k □→ a) is not true in w (in our world). In conclusion, if this argument is sound, we cannot avoid the contrary-to-duty paradox by using the symbolisation C3F-CTD.
Argument 3. We turn now to the pragmatic oddity. We have mentioned that this is a problem for some quick solutions and for the modal solution. It is also a problem for the counterfactual solution. In
every counterfactual deontic system that includes the tableau rule Tc5 (see Rönnedal (2012, p. 160)), and hence the schema (A □→ B) → (A → B), the sentence O(k ∧ a) is derivable from CF-CTD. This is
odd since it does not seem to follow that it ought to be that you keep your promise and apologise (for not keeping your promise) from N-CTD and since it seems that (A □→ B) → (A → B) should hold in
every reasonable counterfactual logic. The following semantic tableau shows that O(k ∧ a) is derivable from CF-CTD (in most counterfactual deontic systems).
(1) Ok, 0
(2) k □→ O¬a, 0
(3) ¬k □→ Oa, 0
(4) ¬k, 0
(5) ¬O(k ∧ a), 0
(6) P¬(k ∧ a), 0 [5, ¬O]
(7) 0s1 [6, P]
(8) ¬(k ∧ a), 1 [6, P]
(9) k, 1 [1, 7, O]
(10) ¬k → Oa, 0 [3, DR8]
(11) ¬¬k, 0 [10, →] (12) Oa, 0 [10, →]
(13) * [4, 11] (14) a, 1 [12, 7, O]
(15) ¬k, 1 [8, ¬∧] (16) ¬a, 1 [8, ¬∧]
(17) * [9, 15] (18) * [14, 16]
Argument 4. According to the counterfactual solution, so-called factual detachment holds unrestrictedly, that is, OB always follows from A and A □→ OB. This view is criticised by Decew (1981). From
the proposition that I will not keep my promise and the proposition that if I will not keep my promise I ought to apologise, it does not follow that I ought to apologise. For as long as I still can
keep my promise I ought to keep it, and if I keep it, then I should not apologise. According to Decew, it is not enough that a condition is true, it must be ‘unalterable’ or ‘settled’ before we are
justified in detaching the unconditional obligation. See also Greenspan (1975). If this is correct, the counterfactual solution cannot, in itself, solve all contrary-to-duty paradoxes.
d. Action or Agent Solutions
Now, let us turn to the action or agent solutions. A common idea behind most of these solutions is that we should make a distinction between what is obligatory, actions or so-called practitions, and
the circumstances of obligations. We should combine deontic logic with some kind of action logic or dynamic logic. And when we do this, we can avoid the contrary-to-duty paradox. Three subcategories
deserve to be mentioned: (1) Castañeda’s solution, (2) the Stit solution, and (3) the dynamic deontic solution.
Castañeda has developed a unique approach to deontic logic. According to him, any useful deontic calculus must contain two types of sentences even at the purely sentential level. One type is used to
symbolise the indicative clauses—that speak about the conditions and not the actions that are considered obligatory—in a conditional obligation, and the other type is used to symbolise the infinitive
clauses that speak about the actions that are considered obligatory and not the conditions. Castañeda thinks that the indicative components, but not the infinitive ones, allow a form of (internal)
modus ponens. From N3 and N4 we can derive the conclusion that you ought to apologise, but from N1 and N2 we cannot derive the conclusion that you ought not to apologise. Hence, we avoid the
contradiction. For more on this approach, see, for instance, Castañeda (1981). For a summary of some arguments against Castañeda’s solution, see Carmo and Jones (2002); see also Powers (1967).
According to the Stit solution, deontic logic should be combined with some kind of Stit (Seeing to it) logic. However, Stit logic is often combined with temporal logic. So, this approach can also be
classified as a temporal solution. We say a few more words about this kind of view in Section 2e.
To illustrate this type of solution to the contrary-to-duty paradox, let us now discuss the dynamic deontic solution and some problems with this particular way of solving the puzzle.
An example: The dynamic deontic solution
According to the dynamic deontic proposal, we can solve the contrary-to-duty paradox if we combine deontic logic with dynamic logic. A view of this kind is suggested by Meyer (1988), which includes a
dynamic deontic system. We will now consider this solution and some arguments for and against it. Dynamic deontic logic is concerned with what we ought to do rather than with what ought to be, and
the sentences in N-CTD should be interpreted as telling us what you ought to do. The solution is criticised by Anglberger (2008).
Dynamic deontic logic introduces some new notions: α stands for some action, the formula [α]A denotes that performance of the action α (necessarily) leads to a state (or states) where A holds, where
A is any sentence and [α] is similar to an ordinary necessity-like modal operator (the so-called box). The truth-conditions of [α]A are as follows: [α]A is true in a possible world w if and only if
all possible worlds w’ with Rα(w, w’) satisfy A. Rα is an accessibility relation Rα ⊆ W ⨯ W associated with α, where W is the set of possible worlds or states. Rα(w, w’) says that from w one (can)
get into state w’ by performing α. Fα, to be read ‘the action α is forbidden’, can be defined as Fα ↔ [α]V (call this equivalence Def F; ↔ is ordinary material equivalence), where V is a special
atomic formula denoting violation, in other words, that some action is forbidden if and only if doing the action leads to a state of violation. Oα, to be read ‘the action α is obligatory’ or ‘it is
obligatory to perform the action α’, can now be defined as Oα ↔ F(-α) (call this equivalence Def O), where ‐α stands for the non-performance of α. Two further formulas should be explained: α ; β
stands for ‘the performance of α followed by β’, and α & β stands for ‘the performance of α and β (simultaneously)’.
The first three sentences in N-CTD can now be formalised in the following way in dynamic deontic logic:
DDLF1 Oα
DDLF2 [α]O‐β
DDLF3 [‐α]Oβ
Let DDLF-CTD = {Oα, [α]O‐β, [‐α]Oβ}, where α stands for the act of keeping your promise (and helping your friend) and β for the act of apologising. In dynamic deontic logic, it is not possible to
represent (the dynamic version of) N4, which states that the act of keeping your promise is not performed. This should perhaps make one wonder whether the formalisation is adequate (see Argument 1
below in this section). Yet, if we accept this fact, we can see that the representation solves the contrary-to-duty paradox. From DDLF-CTD it is not possible to derive a contradiction. So, in dynamic
deontic logic we can solve the contrary-to-duty paradox.
Arguments for the dynamic solution
Meyer’s system is interesting and there seem to be independent reasons to want to combine deontic logic with some kind of action logic or dynamic logic. The symbolisations of the sentences in N-CTD
seem intuitively plausible. DDLF-CTD is consistent; the set is dilemma-free and we cannot derive the proposition that everything is both obligatory and forbidden from it. We can assign formal
sentences with analogous structures to all conditional obligations in N-CTD. We do not have to postulate several different types of unconditional obligations. Furthermore, from DDLF-CTD it is
possible to derive O(α ; ‐β) ∧ [‐α](V ∧ Oβ), which says that it is obligatory to perform the sequence α (keeping your promise) followed by ‐β (not-apologising), and if α has not been done (that is,
if you do not keep your promise), one is in a state of violation and it is obligatory to do β; that is, it is obligatory to apologise. This conclusion is intuitively plausible. Nevertheless, there
are also some potential and quite serious problems with this kind of solution.
Arguments against the dynamic solution
We consider four arguments against the dynamic solution to the contrary-to-duty paradox in this section. Versions of the second and the third can be found in Anglberger (2008). However, as far as we
know, Argument 1 and Argument 4 have not been discussed in the literature before. According to the first argument, we cannot symbolise all premises in dynamic deontic logic, which is unsatisfactory.
If we try to avoid this problem, we run into the pragmatic oddity once again. According to the second argument, the dynamic formalisations of the contrary-to-duty sets are not non-redundant.
According to the third, it is provable in Meyer’s system PDeL + ¬O(α & ‐α) that no possible action is forbidden, which is clearly implausible. ‘¬O(α & ‐α)’ says that it is not obligatory to perform
α and non-α. According to the fourth argument, there seem to be action- and/or agentless contrary-to-duty paradoxes, which seem impossible to solve in dynamic deontic logic.
Argument 1. We cannot symbolise all sentences in N-CTD in dynamic deontic logic; there is no plausible formalisation of N4. This is quite problematic. If the sentence N4 cannot be represented in
dynamic deontic logic, how can we then claim that we have solved the paradox? Meyer suggests adding a predicate DONE that attaches to action names (Meyer (1988)). Then, DONE(α) says that action α has
been performed. If we add this predicate, we can symbolise all sentences in N-CTD. Sentence N4 is rendered DONE(-α). Meyer appears to think that (DONE(α)→A) is derivable from [α]A. This seems
plausible. Still, if we assume this, we can deduce a dynamic counterpart of the pragmatic oddity from our contrary-to-duty sets. To prove this, we use a lemma, Lemma 1, that is a theorem in dynamic
deontic logic. α and β are interpreted as above.
Lemma 1. O(α & β) ↔ (Oα ∧ Oβ) [Theorem 19 in Meyer (1988)]
1. Oα N1
2. [α]O‐β N2
3. [-α]Oβ N3
4. DONE(-α) N4
5. [-α]Oβ : (DONE(-α) → Oβ) Property of DONE
6. DONE(-α) → Oβ 3, 5
7. Oβ 4, 6
8. Oα ∧ Oβ 1, 7
9. O(α & β) ↔ (Oα ∧ Oβ) Instance of Lemma 1
10. O(α & β) 8, 9
But the conclusion 10 in this argument says that it is obligatory that you perform the act of keeping your promise and the act of apologising (for not keeping your promise), and this is
Argument 2. Recall that the first three sentences in N-CTD are symbolized in the following way: DDLF1 Oα, DDLF2 [α]O‐β, and DDLF3 [-α]Oβ. We will show that we can derive DDLF3 from DDLF1. It follows
that the formalisation of N-CTD in dynamic deontic logic is not non-redundant. This is our second argument. The rules that are used in the proofs below are mentioned by Meyer (1988).
Lemma 2 Fα → F(α & β) [Theorem 16 in Meyer (1988)]
Lemma 3 α ; β = α & -(α ; ‐β)
1. α & -(α ; ‐β) = ‐ ‐α & -(α ; ‐β) [Act‐ ‐]
2. ‐ ‐α & -(α ; ‐β) = -(-α ∪ (α ; ‐β)) [Act-∪]
3. -(-α ∪ (α ; ‐β)) = ‐ ‐(α ; β) [Act-;]
4. ‐ ‐(α ; β) = α ; β [Act‐ ‐]
5. α & -(α ; ‐β) = α ; β [1–4]
Lemma 4 Fα → F(α ; β)
1. Fα → F(α & β) Lemma 2
2. Fα → F(α & -(α ; ‐β)) -(α ; ‐β)/β
3. Fα → F(α ; β) 2, Lemma 3
Lemma 5 Fα → [α]Fβ
1. Fα → F(α; β) Lemma 4
2. [α]V → [α; β]V 1, Def F
3. [α]V → [α][β]V 2, (;)
4. Fα → [α]Fβ 3, Def F
Oα is equivalent to F‐α and [‐α]Oβ to [‐α]F‐β. F‐α → [‐α]F‐β is an instance of Lemma 5. So, DDLF3 in DDLF-CTD is derivable from DDLF1. Consequently, DDLF-CTD is not non-redundant.
Argument 3. Here is our third argument. This argument shows that if we add Axiom DD (¬O(α & ‐α)) to Meyer’s dynamic deontic logic PDeL, we can derive a sentence that, in effect, says that no
possible action is forbidden. Axiom DD seems to be intuitively plausible, as it is a dynamic counterpart of the axiom D in Standard Deontic Logic that rules out moral dilemmas. Hence, this problem is
quite serious. In the proof below, T is Verum and ⊥ is Falsum. T is equivalent to an arbitrary logical truth (for example, p or not-p) and ⊥ is equivalent to an arbitrary contradiction (for example,
p and not-p). Obviously, T is equivalent to ¬⊥ and ⊥ is equivalent to ¬T. (Let us call these equivalences Def T and Def ⊥.) Furthermore, <α>β is equivalent to ¬[α]¬β (let us call this equivalence
Def <>). So, <α> is similar to an ordinary possibility-like modal operator (the so-called diamond). []-nec (or N) is a fundamental rule in Meyer’s system. It says that if B is a theorem (in the
system), then [α]B is also a theorem (in the system).
Axiom DD ¬O(α & ‐α) [DD is called NCO in Meyer (1988)]
Lemma 6 [α](A ∧ B) ↔ ([α]A ∧ [α]B) [Theorem 3 in Meyer (1988)]
1. Fα → [α]F‐β Lemma 5 ‐β/β
2. Fα → [α]F‐ ‐β Lemma 5 ‐ ‐β/β
3. Fα → [α]Oβ 1, Def O
4. Fα → [α]O‐β 2, Def O
5. Fα → ([α]Oβ ∧ [α]O‐β) 3, 4
6. [α](Oβ ∧ O‐β) ↔ ([α]Oβ ∧ [α]O‐β) Lemma 6 Oβ/A, O‐β/B
7. Fα → [α](Oβ ∧ O‐β) 5, 6, Replacement
8. O(β & ‐β) ↔ (Oβ ∧ O‐β) Lemma 1 β/α, ‐β/β
9. Fα → [α]O(β & ‐β) 7, 8
10. ¬O(β & ‐β) Axiom DD β/α
11. [α]¬O(β & ‐β) 10, []‐nec
12. Fα → ([α]O(β & ‐β) ∧ [α]¬O(β & ‐β)) 9, 11
13. [α](O(β & ‐β) ∧ ¬O(β & ‐β))↔([α]O(β & ‐β) ∧ [α]¬O(β & ‐β)) Lemma 6 O(β & ‐β)/A, ¬O(β & ‐β)/B
14. Fα → [α](O(β & ‐β) ∧ ¬O(β & ‐β)) 12, 13
15. Fα → [α]⊥ 14, Def ⊥
16. (Fα ∧ <α>T) → ([α]⊥ ∧ <α>T) 15
17. <α>T ↔ ¬[α]⊥ Def <>, Def T, ⊥
18. (Fα ∧ <α>T) → ([α]⊥ ∧ ¬[α]⊥) 16, 17
19. ¬(Fα ∧ <α>T) 18
In effect, 19 claims that no possible action is forbidden. As Anglberger points out, Fα → [α]⊥ (line 15) seems implausible, but it can be true. If α is an impossible action, the consequent—and hence
the whole sentence—is true. Nonetheless, if α is possible, α cannot be forbidden. <α>T says that α is possible, in the sense that there is a way to execute α that leads to a state in which T holds.
Clearly 19 is implausible. Clearly, we want to be able to say that at least some possible action is forbidden. So, adding the intuitively plausible axiom DD to Meyer’s dynamic deontic logic PDeL is
highly problematic.
Argument 4. The last argument against the dynamic solution to the contrary-to-duty paradox that we discuss seems to be a problem for most action or agent solutions. At least it is a problem for both
the dynamic solution and the solution that uses some kind of Stit logic. Several examples of such (apparently) action- and/or agentless contrary-to-duty paradoxes have been mentioned in the
literature, such as in Prakken and Sergot (1996). Here we consider one introduced by Rönnedal (2018).
Scenario II: Contrary-to-duty paradoxes involving (apparently) action- and/or agentless contrary-to-duty obligations (Rönnedal (2018))
Consider the following scenario. At t1, you are about to get into your car and drive somewhere. Then at t1 it ought to be the case that the doors are closed at t2, when you are in your car. If the
doors are not closed, then a warning light ought to appear on the car instrument panel (at t3, a point in time as soon as possible after t2). It ought to be that if the doors are closed (at t2), then
it is not the case that a warning light appears on the car instrument panel (at t3). Furthermore, the doors are not closed (at t2 when you are in the car). In this example, all of the following
sentences seem to be true:
AN1 (At t1) The doors ought to be closed (at t2).
AN2 (At t1) It ought to be that if the doors are closed (at t2), then it is not the case that a warning light appears on the car instrument panel (at t3).
AN3 (At t1) If the doors are not closed (at t2) then a warning light ought to appear on the car instrument panel (at t3).
AN4 (At t1 it is the case that at t2) The doors are not closed.
N2-CTD is similar to N-CTD. In this set, AN1 expresses a primary obligation (or ought), and AN3 expresses a contrary-to-duty obligation. The condition in AN3 is satisfied only if the primary
obligation expressed by AN1 is violated. But AN3 does not seem to tell us anything about what you or someone else ought to do, and it does not seem to involve any particular agent. AN3 appears to be
an action- and agentless contrary-to-duty obligation. It tells us something about what ought to be the case if the world is not as it ought to be according to AN1. It does not seem to be possible to
find any plausible symbolisations of N2-CTD and similar paradoxes in dynamic deontic logic or any Stit logic.
Can someone who defends this kind of solution avoid this problem? Two strategies come to mind. One could argue that every kind of apparently action- and agentless contrary-to-duty paradox really
involves some kind of action and agent when it is analysed properly. One could, for instance, claim that N2-CTD really includes an implicit agent. It is just that the agent is not a human being; the
agent is the car or the warning system in the car. When analysed in detail, AN3 should be understood in the following way:
AN3(b) (At t1) If the doors are not closed (at t2) then the car or the warning system in the car ought to see to it that a warning light appears on the car instrument panel (at t3).
According to this response, one can always find some implicit agent and action in every apparently action- and/or agentless contrary-to-duty paradox. If this is the case, the problem might not be
decisive for this kind of solution.
According to the second strategy, we simply deny that genuinely action- and/or agentless obligations are meaningful. If, for example, the sentences in N2-CTD are genuinely actionless and agentless,
then they are meaningless and we cannot derive a contradiction from them. Hence, the paradox is solved. If, however, we can show that they involve some kind of actions and some kind of agent or
agents, we can use the first strategy to solve them.
Whether any of these strategies is successful is, of course, debatable. There certainly seems to be genuinely action- and agentless obligations that are meaningful, and it seems prima facie unlikely
that every apparently action- and agentless obligation can be reduced to an obligation that involves an action and an agent. Is it, for example, really plausible to think of the car or the warning
system in the car as an acting agent that can have obligations? Does AN3 [(At t1) If the doors are not closed (at t2) then a warning light ought to appear on the car instrument panel (at t3)] say the
same thing as AN3(b) [(At t1) If the doors are not closed (at t2) then the car or the warning system in the car ought to see to it that a warning light appears on the car instrument panel (at t3)]?
e. Temporal Solutions
In this section, we consider some temporal solutions to the contrary-to-duty paradox. The temporal approaches can be divided into three subcategories: (1) the pure temporal solution(s), (2) the
temporal-action solution(s), and (3) the temporal dyadic deontic solution(s). All of these combine some kind of temporal logic with some kind of deontic logic. According to the temporal-action
solutions, we should also add some kind of action logic to the other parts. Some of the first to construct systems that include both deontic and temporal elements were Montague (1968) and Chellas
According to the pure temporal solutions, we should use systems that combine ordinary so-called monadic deontic logic with some kind of temporal logic (perhaps together with a modal part) when we
symbolise our contrary-to-duty obligations. See Rönnedal (2012, pp. 106–112) for more on some pure temporal solutions and on some problems with such approaches.
The idea of combining temporal logic, deontic logic and some kind of action logic has gained traction. A particularly interesting development is the so-called Stit (Seeing to it) paradigm. According
to this paradigm, it is important to make a distinction between agentive and non-agentive sentences. A (deontic) Stit system is a system that includes one or several Stit (Seeing to it) operators
that can be used to formalise various agentive sentences. The formula ‘[α: stit A]’ (‘[α: dstit A]’), for instance, says ‘agent α sees to it that A’ (‘agent α deliberately sees to it that A’). [α:
(d)stit A] can be abbreviated as [α: A]. Some have argued that systems of this kind can be used to solve the contrary-to-duty paradox; see, for instance, Bartha (1993). According to the Stit
approach, deontic constructions must take agentive sentences as complements; in a sentence of the form OA, A must be (or be equivalent to) a Stit sentence. A justification for this claim is,
according to Bartha, that practical obligations, ‘ought to do’s’, should be connected to a specific action by a specific agent. The construction ‘agent α is obligated to see to it that A’ can now be
defined in the following way: O[α: A] ⟺ L(¬[α: A] → S), where L says that ‘It is settled that’ and S says that ‘there is wrongdoing’ or ‘there is violation of the rules’ or something to that effect.
Hence, α is obligated to see to it that A if and only if it is settled that if she does not see to it that A, then there is wrongdoing. In a logic of this kind, N-CTD can be symbolised in the
following way: {O[α: k], O[α: [α: k] → [α:¬a]], O[α:¬[α: k] → [α: [α: a]]], ¬[α: k]}. And this set is consistent in Bartha’s system. For more on Stit logic and many relevant references, see Horty
(2001), and Belnap, Perloff and Xu (2001).
An example: The temporal dyadic deontic solution
Here we consider, as an example of a temporal solution, the temporal dyadic deontic solution. We should perhaps not talk about ‘the’ temporal dyadic deontic solution, since there really are several
different versions of this kind of view. However, let us focus on an example presented in Rönnedal (2018). What is common to all approaches of this kind is that they use some logical system that
combines dyadic deontic logic with temporal logic to solve the contrary-to-duty paradox. Usually, the various systems also include a modal part with one or several necessity- and
possibility-operators. Solutions of this kind are discussed by, for example, Åqvist (2003), van Eck (1982), Loewer and Belzer (1983), and Feldman (1986, 1990) (see also Åqvist and Hoepelman (1981)
and Thomason (1981, 1981b)). Castañeda (1977) and Prakken and Sergot (1996) express some doubts about this kind of approach.
We first describe how the contrary-to-duty paradox can be solved in temporal alethic dyadic deontic logic of the kind introduced by Rönnedal (2018). Then, we consider some reasons why this solution
is attractive. We end by mentioning a potential problem with this solution. In temporal alethic dyadic deontic logic, N-CTD can be symbolised in the following way:
F1. Rt1O[T]Rt2k
F2. Rt1O[Rt2k]Rt3¬a
F3. Rt1O[Rt2¬k]Rt3a
F4. Rt1Rt2¬k [⇔Rt2¬k]
where k and a are interpreted as in SDL-CTD. R is a temporal operator; ‘Rt1A’ says that it is realised at time t1 (it is true on t1) that A, and so forth. t1 refers to the moment on Monday when you
make your promise, t2 refers to the moment on Friday when you should keep your promise and t3 refers to the moment on Saturday when you should apologise if you do not keep your promise on Friday. O
is a dyadic deontic sentential operator of the kind mentioned in Section 2c. ‘O[B]A’ says that it is obligatory that (it ought to be the case that) A given B. In dyadic deontic logic, an
unconditional, monadic O-operator can be defined in terms of the dyadic deontic O-operator in the following way: OA =df O[T]A. According to this definition, it is unconditionally obligatory that A if
and only if it is obligatory that A given Verum. All other symbols are interpreted as above. Accordingly, F1 is read as ‘It is true on Monday that you ought to keep your promise on Friday’. F2 is
read as ‘It is true on Monday that it ought to be the case that you do not apologise on Saturday given that you keep your promise on Friday’. F3 is read as ‘It is true on Monday that it ought to be
the case that you apologise on Saturday given that you do not keep your promise on Friday’. F4 is read as ‘It is true on Monday that it is true on Friday that you do not keep your promise’; in other
words, ‘It is true on Friday that you do not keep your promise’. This rendering of N-CTD seems to be plausible.
In temporal (alethic) dyadic deontic logic, truth is relativized to world-moment pairs. This means that a sentence can be true in one possible world w at a particular time t even though it is false
in some other possible world, say w’, at this time (that is, at t) or false in this world (that is, in w) at another time, say t’. Some (but not all) sentences are temporally settled. A temporally
settled sentence satisfies the following condition: if it is true (in a possible world), it is true at every moment of time (in this possible world); and if it is false (in a possible world), it is
false at every moment of time (in this possible world). All the sentences F1−F4 are temporally settled; O[T]Rt2k, O[Rt2k]Rt3¬a and O[Rt2¬k]Rt3a are examples of sentences that are not, as their
truth values may vary from one moment of time to another (in one and the same possible world).
Rt1Rt2¬k is equivalent to Rt2¬k. For it is true on Monday that it is true on Friday that you do not keep your promise if and only if it is true on Friday that you do not keep your promise. Hence,
from now on we use Rt2¬k as a symbolisation of N4. Note that it might be true on Monday that you will not keep your promise on Friday (in some possible world) even though this is not a settled
fact—in other words, even though it is not historically necessary. In some possible worlds, you will keep your promise on Friday and in some possible worlds you will not. F4 is true at t1 (on Monday)
in the possible worlds where you do not keep your promise at t2 (on Friday).
Let F-CTD = {F1, F2, F3, F4}. F-CTD is consistent in most interesting temporal alethic dyadic deontic systems (see Rönnedal (2018) for a rigorous proof of this claim). Hence, we can solve the
contrary-to-duty paradox in temporal alethic dyadic deontic logic.
Arguments for the temporal alethic dyadic deontic solution
We now consider some reasons why the temporal alethic dyadic deontic solution to the contrary-to-duty paradox is attractive. We first briefly mention some features; then, we discuss some reasons in
more detail. (1) F-CTD is consistent. (2) F-CTD is non-redundant. (3) F-CTD is dilemma-free. (4) It is not possible to derive the proposition that everything is both obligatory and forbidden from
F-CTD. (5) F-CTD avoids the so-called pragmatic oddity. (6) The solution in temporal alethic dyadic deontic logic is applicable to (at least apparently) action- and agentless contrary-to-duty
examples. (7) We can assign formal sentences with analogous structures to all conditional obligations in N-CTD in temporal alethic dyadic deontic logic. (8) We can express the idea that an obligation
has been violated, and (9) we can symbolise higher order contrary-to-duty obligations in temporal alethic dyadic deontic logic. (10) In temporal alethic dyadic deontic logic we can derive ‘ideal’
obligations, and (11) we can derive ‘actual’ obligations (in certain circumstances). (12) We can avoid the so-called dilemma of commitment and detachment in temporal alethic dyadic deontic logic. All
of these reasons are discussed in Rönnedal (2018). Now let us say a few more words about some of them.
Reason (I): F-CTD is dilemma-free. The solution in temporal alethic dyadic deontic logic is dilemma-free. The sentence Rt1O[T]Rt3¬a is derivable from F1 and F2 (in some systems) (see Reason V below)
and from F3b and F4 we can deduce the formula Rt2O[T]Rt3a (in some systems under some circumstances) (see Reason VI below). Accordingly, we can derive the following sentence: Rt1O[T]Rt3¬a ∧ Rt2O[T]
Rt3a (in certain systems). Rt1O[T]Rt3¬a says ‘On Monday [when you have not yet broken your promise] it ought to be the case that you do not apologise on Saturday’, and Rt2O[T]Rt3a says ‘On Friday
[when you have broken your promise] it ought to be the case that you apologise on Saturday’. Despite this, O[T]Rt3a and O[T]Rt3¬a are not true at the same time. Neither Rt1O[T]Rt3¬a ∧ Rt1O[T]Rt3a
nor Rt2O[T]Rt3¬a ∧ Rt2O[T]Rt3a is derivable from F-CTD in any interesting temporal alethic dyadic deontic system. Consequently, this is not a moral dilemma. Since N-CTD seems to be dilemma-free, we
want our formalisation of N-CTD to be dilemma-free too; and F-CTD is, as we have seen, dilemma-free. This is one good reason to be attracted to the temporal alethic dyadic deontic solution.
Reason (II): F-CTD avoids the so-called pragmatic oddity. Neither O[T](Rt2k ∧ Rt3a), Rt1O[T](Rt2k ∧ Rt3a) nor Rt2O[T](Rt2k ∧ Rt3a) is derivable from F-CTD in any interesting temporal alethic dyadic
deontic system. Hence, we can avoid the pragmatic oddity (see Section 2a above).
Reason (III): The solution in temporal alethic dyadic deontic logic is applicable to (at least apparently) actionless and agentless contrary-to-duty examples. In Section 2d, we considered an example
of an (apparently) action- and agentless contrary-to-duty paradox. In temporal alethic dyadic deontic logic, it is easy to find plausible symbolisations of (apparently) action- and agentless
contrary-to-duty obligations; the sentences in N2-CTD have the same logical form as the sentences in N-CTD. It follows that contrary-to-duty paradoxes of this kind can be solved in exactly the same
way as we solved our original paradox.
Reason (IV): We can assign formal sentences with analogous structures to all conditional obligations in N-CTD in temporal alethic dyadic deontic logic. According to some deontic logicians, a
formalisation of N-CTD is adequate only if the formal sentences assigned to N2 and N3 have the same (or analogous) logical form (see Carmo and Jones (2002)). The temporal alethic dyadic deontic
solution satisfies this requirement. Not all solutions do that. F2 and F3 have the ‘same’ logical form and they can both be formalised using dyadic obligation.
Reason (V): We can derive ‘ideal’ obligations in temporal alethic dyadic deontic logic. N1 and N2 seem to entail that you ought not to apologise. Ideally you ought to keep your promise, and ideally
it ought to be that if you keep your promise, then you do not apologise (for not keeping your promise). Accordingly, ideally you ought not to apologise. We want our formalisation of N-CTD to reflect
this intuition. Rt1O[T]Rt3¬a is deducible from F1 (Rt1O[T]Rt2k) and F2 (Rt1O[Rt2k]Rt3¬a) in many temporal dyadic deontic systems. The tableau below proves this.
We use two derived rules in our deduction. These are also used in our next semantic tableau (see Reason VI below). According to the first derived rule, DR1, we may add ¬A, wit to any open branch in
a tree that includes ¬RtA, witj. This rule is deducible in every system. According to the second derived rule, DR2, we may add O[T](A → B), witj to any open branch in a tree that contains O[A]B,
witj. DR2 can be derived in every system that includes the rules T − Dα0 and T − Dα2. (All other special rules that we use in our deductions are described by Rönnedal (2018).)
(1) Rt1O[T]Rt2k, w0t0
(2) Rt1O[Rt2k]Rt3¬a, w0t0
(3) ¬Rt1O[T]Rt3¬a, w0t0
(4) ¬O[T]Rt3¬a, w0t1 [3, DR1]
(5) P[T]¬Rt3¬a, w0t1 [4, ¬O]
(6) sTw0w1t1 [5, P]
(7) ¬Rt3¬a, w1t1 [5, P]
(8) ¬¬a, w1t3 [7, DR1]
(9) O[T]Rt2k, w0t1 [1, Rt]
(10) Rt2k, w1t1 [9, 6, O]
(11) k, w1t2 [10, Rt]
(12) O[Rt2k]Rt3¬a, w0t1 [2, Rt]
(13) O[T](Rt2k → Rt3¬a), w0t1 [12, DR2]
(14) Rt2k → Rt3¬a, w1t1 [13, 6, O]
(15) ¬Rt2k, w1t1 [14, →] (16) Rt3¬a, w1t1 [14, →]
(17) ¬k, w1t2 [15, DR1] (18) ¬a, w1t3 [16, Rt]
(19) * [11, 17] (20) * [8, 18]
Informally, Rt1O[T]Rt3¬a says that it is true at t1, that is, on Monday, that it ought to be the case that you will not apologise on Saturday when you meet your friend. For, ideally, you keep your
promise on Friday. Yet, Rt2O[T]Rt3¬a does not follow from F1 and F2 (see Reason I above). On Friday, when you have broken your promise, and when it is no longer historically possible for you to keep
your promise, then it is not obligatory that you do not apologise on Saturday. On Friday, it is obligatory that you apologise when you meet your friend on Saturday (see Reason VI). Nevertheless, it
is plausible to claim that it is true on Monday that it ought to be the case that you do not apologise on Saturday. For on Monday it is not a settled fact that you will not keep your promise; on
Monday, it is still possible for you to keep your promise, which you ought to do. These conclusions correspond well with our intuitions about Scenario I.
According to the counterfactual solution (see Section 2c) to the contrary-to-duty paradoxes, we cannot derive any ‘ideal’ obligations of this kind. This is a potential problem for this solution.
Reason (VI): We can derive ‘actual’ obligations in temporal alethic dyadic deontic logic (in certain circumstances). N3 and N4 appear to entail that you ought to apologise. Ideally you ought to keep
your promise, but if you do not keep your promise, you ought to apologise. As a matter of fact, you do not keep your promise. It follows that you should apologise. We want our symbolisation of N-CTD
to reflect this intuition. Therefore, let us assume that the conditional (contrary-to-duty) obligation expressed by N3 is still in force at time t2; in other words, we assume that the following
sentence is true:
F3b Rt2O[Rt2¬k]Rt3a.
Informally, F3b says that it is true at t2 (on Friday) that if you do not keep your promise on Friday, you ought to apologise on Saturday. Rt2O[T]Rt3a is derivable from F4 (Rt2¬k) and F3b in every
tableau system that includes T−Dα0, T−Dα2, T−DMO (the dyadic must-ought principle) and T−BT (backward transfer) (see Rönnedal (2018)). According to Rt2O[T]Rt3α, it is true at t2 (on Friday), when you
have broken your promise to your friend, that it ought to be the case that you apologise to your friend on Saturday when you meet her.
Note that Rt1O[T]Rt3a is not deducible from F3 (or F3b or F3 and F3b) and F4 (see Reason I). According to Rt1O[T]Rt3a, it is true at t1, on Monday, that you should apologise to you friend on Saturday
when you meet her. However, on Monday it is not yet a settled fact that you will not keep your promise to your friend; on Monday it is still open to you to keep your promise. Accordingly, it is not
true on Monday that you should apologise on Saturday. Since it is true on Monday that you ought to keep your promise, and it ought to be that if you keep your promise then you do not apologise, it
follows that it is true on Monday that it ought to be the case that you do not apologise on Saturday (see Reason V). These facts correspond well with our intuitions about Scenario I.
The following tableau proves that Rt2O[T]Rt3a is derivable from F3b and F4:
(1) Rt2¬k, w0t0
(2) Rt2O[Rt2¬k]Rt3a, w0t0
(3) ¬Rt2O[T]Rt3a, w0t0
(4) ¬O[T]Rt3a, w0t2 [3, DR1]
(5) P[T]¬Rt3a, w0t2 [4, ¬O]
(6) sTw0w1t2 [5, P]
(7) ¬Rt3a, w1t2 [5, P]
(8) ¬a, w1t3 [7, DR1]
(9) rw0w1t2 [6, T − DMO]
(10) ¬k, w0t2 [1, Rt]
(11) O[Rt2¬k]Rt3a, w0t2 [2, Rt]
(12) O[T](Rt2¬k → Rt3a), w0t2 [11, DR2]
(13) Rt2¬k → Rt3a, w1t2 [6, 12, O]
(14) ¬Rt2¬k, w1t2 [13, →] (15) Rt3a, w1t2 [13, →]
(16) ¬¬k, w1t2 [14, DR1] (17) a, w1t3 [15, Rt]
(18) k, w1t2 [16, ¬¬] (19) * [8, 17]
(20) k, w0t2 [9, 18, T − BT]
(21) * [10, 20]
F3 and F3b are independent of each other (in most interesting temporal alethic dyadic deontic systems). Hence, one could argue that N3 should be symbolised by a conjunction of F3 and F3b. For we have
assumed that the contrary-to-duty obligation to apologise, given that you do not keep your promise, is still in force at t2. It might be interesting to note that this does not affect the main results
in this section. {F1, F2, F3, F3b, F4} is, for example, consistent, non-redundant, and so on. So, we can use such an alternative formalisation of N3 instead of F3. Moreover, note that the
symbolisation of N2 can be modified in a similar way.
Reason (VII): In temporal alethic dyadic deontic logic we can avoid the so-called dilemma of commitment and detachment. (Factual) Detachment is an inference pattern that allows us to infer or detach
an unconditional obligation from a conditional obligation and this conditional obligation’s condition. Thus, if detachment holds for the conditional (contrary-to-duty) obligation that you should
apologise if you do not keep your promise (if detachment is possible), and if you in fact do not keep your promise, then we can derive the unconditional obligation that you should apologise.
van Eck (1982, p. 263) describes the so-called dilemma of commitment and detachment in the following way: (1) detachment should be possible, for we cannot take seriously a conditional obligation if
it cannot, by way of detachment, lead to an unconditional obligation; and (2) detachment should not be possible, for if detachment is possible, the following kind of situation would be
inconsistent—A, it ought to be the case that B given that A; and C, it ought to be the case that not-B given C. Yet, such a situation is not necessarily inconsistent.
In pure dyadic deontic logic, we cannot deduce the unconditional obligation that it is obligatory that A (OA) from the dyadic obligation that it is obligatory that A given B (O[B]A) and B. Still, if
this is true, how can we take such conditional obligations seriously? Hence, the dilemma of commitment and detachment is a problem for solutions to the contrary-to-duty paradox in pure dyadic deontic
logic. In temporal alethic dyadic deontic logic, we can avoid this dilemma. We cannot always detach an unconditional obligation from a conditional obligation and its condition, but we can detach the
unconditional obligation OB from O[A]B and A if A is non-future or historically necessary (in some interesting temporal alethic dyadic deontic systems). This seems to give us exactly the correct
answer to the current problem. Detachment holds, but the rule does not hold unrestrictedly. We have seen above that Rt2O[T]Rt3a, but not Rt1O[T]Rt3a, is derivable from Rt2¬k and Rt2O[Rt2¬k]Rt3a in
certain systems, that is, that we can detach the former sentence, but not the latter. Nevertheless, we cannot conclude that a set of the following kind must be inconsistent: {A, O[A]B, C, O[C]¬B};
this seems to get us exactly what we want.
All of these reasons show that the temporal dyadic deontic solution is very attractive. It avoids many of the problems with other solutions that have been suggested in the literature. However, even
though the solution is quite attractive, it is not unproblematic. We will now consider one potential serious problem.
An argument against the temporal solutions
The following argument against the temporal dyadic deontic solution appears to be a problem for every other kind of temporal solution too. There seems to be timeless (or parallel) contrary-to-duty
paradoxes. In a timeless (or parallel) contrary-to-duty paradox, all obligations seem, in some sense, to be in force simultaneously, and both the antecedent and consequent in the contrary-to-duty
obligation appear to ‘refer’ to the same time (if indeed they refer to any time at all). Such paradoxes cannot be solved in temporal dyadic deontic logic or any other system of this kind. For a
critique of temporal solutions to the contrary-to-duty paradoxes, see Castañeda (1977). Several (apparently) timeless (or parallel) contrary-to-duty paradoxes are mentioned by Prakken and Sergot
Here is one example.
Scenario III: The Dog Warning Sign Scenario (After Prakken and Sergot (1996))
Consider the following set of cottage regulations. It ought to be that there is no dog. It ought to be that if there is no dog, there is no warning sign. If there is a dog, it ought to be that there
is a warning sign. Suppose further that there is a dog. Then all of the following sentences seem to be true:
(TN1) It ought to be that there is no dog.
(TN2) It ought to be that if there is no dog, there is no warning sign.
(TN3) If there is a dog, it ought to be that there is a warning sign.
(TN4) There is a dog.
(TN1) expresses a primary obligation and (TN3) a contrary-to-duty obligation. The condition in (TN3) is fulfilled only if the primary obligation expressed by (TN1) is violated. Let TN-CTD = {TN1,
TN2, TN3, TN4}. It seems possible that all of the sentences in TN-CTD could be true; the set does not seem to be inconsistent. Yet, if this is the case, TN-CTD poses a problem for all temporal
In this example, all obligations appear to be timeless or parallel; they appear to be in force simultaneously, and the antecedent and consequent in the contrary-to-duty obligation (TN3) seem to refer
to one and the same time (or perhaps to no particular time at all). So, a natural symbolisation is the following:
(FTN1) O[T]¬d
(FTN2) O[¬d]¬w
(FTN3) O[d]w
(FTN4) d
where d stands for ‘There is a dog’ and w for ‘There is a warning sign’ and all other symbols are interpreted as above. Nevertheless, this set is inconsistent in many temporal alethic dyadic deontic
systems. We prove this below. But first let us consider some derived rules that we use in our tableau derivation.
Derived rules
DR3 O[A]B => O[T](A→B)
DR4 O[A]B, O[A](B→C) => O[A]C
DR5 O[T](A→B), A => O[T]B, given that A is non-future.
According to DR3, if we have O[A]B, witj on an open branch in a tree we may add O[T](A→B), witj to this branch in this tree. The other derived rules are interpreted in a similar way. A is non-future
as long as A does not include any operator that refers to the future.
We are now in a position to prove that the set of sentences FTN-CTD = {FTN1, FTN2, FTN3, FTN4} is inconsistent in every temporal dyadic deontic tableau system that includes the rules T–DMO, T–Dα0 –
T–Dα4, T–FT, and T–BT (Rönnedal (2018)). Here is the tableau derivation:
(1) O[T]¬d, w0t0
(2) O[¬d]¬w, w0t0
(3) O[d]w, w0t0
(4) d, w0t0
(5) O[T](¬d → ¬w), w0t0 [2, DR3]
(6) O[T](d → w), w0t0 [3, DR3]
(7) O[T]¬w, w0t0 [1, 5, DR4]
(8) O[T]w, w0t0 [4, 6, DR5]
(9) T, w0t0 [Global Assumption]
(10) STw0w1t0 [9, T–Dα3]
(11) ¬w, w1t0 [7, 10, O]
(12) w, w1t0 [8, 10, O]
(13) * [11, 12]
This is counterintuitive, since TN-CTD seems to be consistent. This is an example of a timeless (parallel) contrary-to-duty paradox.
Can we avoid this problem by introducing some temporal operators in our symbolisation of TN-CTD? One natural interpretation of the sentences in this set is as follows: (TN1) (At t1) It ought to be
that there is no dog; (TN2) (At t1) It ought to be that if there is no dog (at t1), there is no warning sign (at t1); (TN3) (At t1) If there is a dog, then (at t1) it ought to be that there is a
warning sign (at t1); and (TN4) (At t1) There is a dog.
Hence, an alternative symbolisation of the sentence in (TN-CTD) is the following:
(F2TN1) Rt1O[T]Rt1¬d
(F2TN2) Rt1O[Rt1¬d]Rt1¬w
(F2TN3) Rt1O[Rt1d]Rt1w
(F2TN4) Rt1d
Yet, the set F2TN-CTD = {F2TN1, F2TN2, F2TN3, F2TN4} is also inconsistent. The proof is similar to the one above. So, this move does not help. And it does not seem to be the case that we can find any
other plausible symbolisation of TN-CTD in temporal alethic dyadic deontic logic that is consistent. (TN2) cannot, for instance, plausibly be interpreted in the following way: (At t1) It ought to be
that if there is no dog (at t2), there is no warning sign (at t3), where t1 is before t2 and t2 before t3. And (TN3) cannot plausibly be interpreted in the following way: (At t1) If there is a dog,
then (at t2) it ought to be that there is a warning sign (at t3), where t1 is before t2 and t2 before t3.
Hence, (apparently) timeless contrary-to-duty paradoxes pose a real problem for the temporal dyadic deontic solution and other similar temporal solutions.
3. References and Further Reading
Anglberger, A. J. J. (2008). Dynamic Deontic Logic and Its Paradoxes. Studia Logica, Vol. 89, No. 3, pp. 427–435.
Åqvist, L. (1967). Good Samaritans, Contrary-to-duty Imperatives, and Epistemic Obligations. Noûs 1, pp. 361–379.
Åqvist, L. (1984). Deontic Logic. In D. Gabbay and F. Guenthner (eds.) Handbook of Philosophical Logic, Vol. II, D. Reidel, pp. 605–714.
Åqvist, L. (1987). Introduction to Deontic Logic and the Theory of Normative Systems. Naples, Bibliopolis.
Åqvist, L. (2002). Deontic Logic. In Gabbay and Guenthner (eds.) Handbook of Philosophical Logic, 2nd Edition, Vol. 8, Dordrecht/Boston/London: Kluwer Academic Publishers, pp. 147–264.
Åqvist, L. (2003). Conditionality and Branching Time in Deontic Logic: Further Remarks on the Alchourrón and Bulygin (1983) Example. In Segerberg and Sliwinski (eds.) (2003) Logic, law, morality:
thirteen essays in practical philosophy in honour of Lennart Åqvist, Uppsala philosophical studies 51, Uppsala: Uppsala University, pp. 13–37.
Åqvist, L. and Hoepelman, J. (1981). Some theorems about a ‘tree’ system of deontic tense logic. In R. Hilpinen (ed.) New Studies in Deontic Logic, D. Reidel, Dordrecht, pp. 187–221.
Bartha, P. (1993). Conditional obligation, deontic paradoxes, and the logic of agency. Annals of Mathematics and Artificial Intelligence 9, (1993), pp. 1–23.
Belnap, N., Perloff, M. and Xu, M. (2001). Facing the Future: Agents and Choices in Our Indeterminist World. Oxford: Oxford University Press.
Bonevac, D. (1998). Against Conditional Obligation. Noûs, Vol 32 (Mars), pp. 37–53.
Carmo, J. and Jones, A. J. I. (2002). Deontic Logic and Contrary-to-duties. In Gabbay and Guenthner (eds.) (2002) Handbook of Philosophical Logic, vol 8, pp. 265–343.
Castañeda, H. -N. (1977). Ought, Time, and the Deontic Paradoxes. The Journal of Philosophy, Vol. 74, No. 12, pp. 775–791.
Castañeda, H. -N. (1981). The paradoxes of deontic logic: the simplest solution to all of them in one fell swoop. In R. Hilpinen (ed.) New Studies in Deontic Logic, D. Reidel, Dordrecht, pp. 37–85.
Chellas, B. F. (1969). The Logical Form of Imperatives. Stanford: Perry Lane Press.
Chellas, B. F. (1980). Modal Logic: An Introduction. Cambridge: Cambridge University Press.
Chisholm, R. M. (1963). Contrary-to-duty Imperatives and Deontic Logic. Analysis 24, pp. 33–36.
Cox, Azizah Al-Hibri. (1978). Deontic Logic: A Comprehensive Appraisal and a New Proposal. University Press of America.
Danielsson, S. (1968). Preference and Obligation: Studies in the Logic of Ethics. Filosofiska föreningen, Uppsala.
Decew, J. W. (1981). Conditional Obligations and Counterfactuals. The Journal of Philosophical Logic 10, pp. 55–72.
Feldman, F. (1986). Doing The Best We Can: An Essay in Informal Deontic Logic. Dordrecht: D. Reidel Publishing Company.
Feldman, F. (1990). A Simpler Solution to the Paradoxes of Deontic Logic. Philosophical Perspectives, vol. 4, pp. 309–341.
Fisher, M. (1964). A contradiction in deontic logic?, Analysis, XXV, pp. 12–13.
Forrester, J. W. (1984). Gentle Murder, or the Adverbial Samaritan. Journal of Philosophy, Vol. LXXI, No. 4, pp. 193–197.
Gabbay, D., Horty, J., Parent, X., van der Meyden, E. & van der Torre, L. (eds.). (2013). Handbook of Deontic Logic and Normative Systems. College Publications.
Greenspan. P. S. (1975). Conditional Oughts and Hypothetical Imperatives. The Journal of Philosophy, Vol. 72, No. 10 (May 22), pp. 259–276.
Hansson, B. (1969). An Analysis of Some Deontic Logics. Noûs 3, 373-398. Reprinted in Hilpinen, Risto (ed). 1971. Deontic Logic: Introductory and Systematic Readings. Dordrecht: D. Reidel Publishing
Company, pp. 121–147.
Hilpinen, R. (ed). (1971). Deontic Logic: Introductory and Systematic Readings. Dordrecht: D. Reidel Publishing Company.
Hilpinen, R. (ed). (1981). New Studies in Deontic Logic Norms, Actions, and the Foundation of Ethics. Dordrecht: D. Reidel Publishing Company.
Horty, J. F. (2001). Agency and Deontic Logic. Oxford: Oxford University Press.
Jones, A. and Pörn, I. (1985). Ideality, sub-ideality and deontic logic. Synthese 65, pp. 275–290.
Lewis, D. (1974). Semantic analysis for dyadic deontic logic. In S. Stenlund, editor, Logical Theory and Semantical Analysis, pp. 1–14. D. Reidel Publishing Company, Dordrecht, Holland.
Loewer, B. and Belzer, M. (1983). Dyadic deontic detachment. Synthese 54, pp. 295–318.
McNamara, P. (2010). Deontic Logic. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy.
Montague, R. (1968). Pragmatics. In R. Klibansky (ed.) Contemporary Philosophy: Vol. 1: Logic and the Foundations of Mathematics, pp. 102–122, La Nuova Italia Editrice, Firenze, (1968).
Mott, P. L. (1973). On Chisholm’s paradox. Journal of Philosophical Logic 2, pp. 197–211.
Meyer, J.-J. C. (1988). A Different Approach to Deontic Logic: Deontic Logic Viewed as a Variant of Dynamic Logic. Notre Dame Journal of Formal Logic, Vol. 29, Number 1.
Niles, I. (1997). Rescuing the Counterfactual Solution to Chisholm’s Paradox. Philosophia, Vol. 25, pp. 351–371.
Powers, L. (1967). Some Deontic Logicians. Noûs 1, pp. 361–400.
Prakken, H. and Sergot, M. (1996). Contrary-to-duty obligations. Studia Logica, 57, pp. 91–115.
Rescher, N. (1958). An axiom system for deontic logic. Philosophical studies, Vol. 9, pp. 24–30.
Rönnedal, D. (2009). Dyadic Deontic Logic and Semantic Tableaux. Logic and Logical Philosophy, Vol. 18, No. 3–4, pp. 221–252.
Rönnedal, D. (2012). Extensions of Deontic Logic: An Investigation into some Multi-Modal Systems. Department of Philosophy, Stockholm University.
Rönnedal, D. (2016). Counterfactuals in Temporal Alethic-Deontic Logic. South American Journal of Logic. Vol. 2, n. 1, pp. 57–81.
Rönnedal, D. (2018). Temporal Alethic Dyadic Deontic Logic and the Contrary-to-Duty Obligation Paradox. Logic and Logical Philosophy. Vol. 27, No 1, pp. 3–52.
Rönnedal, D. (2019). Contrary-to-duty paradoxes and counterfactual deontic logic. Philosophia, 47 (4), pp. 1247–1282.
Thomason, R. H. (1981). Deontic Logic as Founded on Tense Logic. In R. Hilpinen (ed.) New Studies in Deontic Logic, D. Reidel, Dordrecht, pp. 165–176.
Thomason, R. H. (1981b). Deontic Logic and the Role of Freedom in Moral Deliberation. In R. Hilpinen (ed.) New Studies in Deontic Logic, D. Reidel, Dordrecht, pp. 177–186.
Tomberlin, J. E. (1981). Contrary-to-duty imperatives and conditional obligations. Noûs 15, pp. 357–375.
van Eck, J. (1982). A system of temporally relative modal and deontic predicate logic and its philosophical applications. Logique et Analyse, Vol 25, No 99, pp. 249–290, and No 100, pp. 339–381.
Original publication, as dissertation, Groningen, University of Groningen, 1981.
van der Torre, L. W. N. and Tan, Y. H. (1999). Contrary-To-Duty Reasoning with Preference-based Dyadic Obligations. Annals of Mathematics and Artificial Intelligence 27, pp. 49–78.
Wieringa, R. J. & Meyer, J.-J. Ch. (1993). Applications of Deontic Logic in Computer Science: A Concise Overview. In J.-J. Meyer and R. Wieringa, editors, Deontic Logic in Computer Science: Normative
System Specification, pp. 17–40. John Wiley & Sons, Chichester, England.
van Fraassen, C. (1972). The Logic of Conditional Obligation. Journal of Philosophical Logic 1, pp. 417–438.
van Fraassen, C. (1973). Values and the Heart’s Command. The Journal of Philosophy LXX, pp. 5–19.
von Kutschera, F. (1974). Normative Präferenzen und bedingte Gebote. I Lenk, H., & Berkemann J. (eds.). (1974), pp. 137–165.
von Wright, G. H. (1964). A new system of deontic logic. Danish yearbook of philosophy, Vol. 1, pp. 173–182.
Author Information
Daniel Rönnedal
Email: [email protected]
University of Stockholm
(Visitado 1 veces, 1 visitas hoy) | {"url":"https://argumentame.com/contrary-to-duty-paradox/","timestamp":"2024-11-03T03:26:49Z","content_type":"text/html","content_length":"223636","record_id":"<urn:uuid:0f180e3c-3adb-4e71-aa90-504ee33f04ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00024.warc.gz"} |
How to find square root in java - Java Program to Find Square and Square root of a Number - BTech Geeks
How to find square root in java – Java Program to Find Square and Square root of a Number
How to find square root in java: In the previous article, we have seen Java Program to Find Logarithm of a Number
In this article we are going to see how to find square and square root of a number using java programming language.
Java Program to Find Square and Square root of a Number
How to square root a number in java: Before jumping into the program directly let’s know what is square and square root of a number.
Square: When we multiply the number with itself then we get the square value.
For example:
• If number is 10 then square of 10 is 100.
• If number is 5 then square of 5 is 25.
• If number is 16 then square of 16 is 256.
Square root: It is the number which is multiplied with itself to find the actual number.
For example:
• If number is 100 then square root of 100 is 10.
• If number is 5 then square root of 5 is 2.236(approx.)
• If number is 16 then square root of 16 is 4.
Let’s see different ways to find square and square root of a number.
Method-1: Java Program to Find Square By Multiplying With Itself and Square root By Using Math.sqrt() Method
1. Create scanner class object.
2. Take user input for the number.
3. To find the square multiply the number with itself.
4. To find square root use Math.sqrt() method
import java.util.Scanner;
public class Main
public static void main(String[] args)
// find the square of a number
Scanner sc = new Scanner(System.in);
System.out.print("Enter a number to find square: ");
double number = sc.nextDouble();
double square = number * number;
System.out.println("The square of " + number + " is " + square);
// find the square root of a number
System.out.print("Enter a number to find square root: ");
number = sc.nextDouble();
square = Math.sqrt(number);
System.out.println("The square root of " + number + " is " + square);
Method-2: Java Program to Find Square By Multiplying With Itself and Square root By Using Babylonian Method
1. Create scanner class object.
2. Take user input for the number.
3. To find the square multiply the number with itself.
4. To find square root use Babylonian method.
import java.util.Scanner;
public class Main
public static void main(String[] args)
// find the square of a number
Scanner sc = new Scanner(System.in);
System.out.print("Enter a number to find square: ");
double number = sc.nextDouble();
double square = number * number;
System.out.println("The square of " + number + " is " + square);
// find the square root of a number
System.out.print("Enter a number to find square root: ");
number = sc.nextDouble();
//finding square root by calling square_Root() user defined method
System.out.println("The square root of " + number + " is " + square_Root(number));
//method to find square root
public static double square_Root(double num)
double a = num;
double b = 1;
double e = 0.000001;
while (a - b > e)
a = (a + b) / 2;
b = num / a;
return a;
Enter a number to find square: 5
The square of 5.0 is 25.0
Enter a number to find square root: 16
The square root of 16.0 is 4.000000000000051
Are you a job seeker and trying to find simple java programs for Interview? This would be the right choice for you, just tap on the link and start preparing the java programs covered to crack the
Related Java Programs: | {"url":"https://btechgeeks.com/java-program-to-find-square-and-square-root-of-a-number/","timestamp":"2024-11-03T00:47:06Z","content_type":"text/html","content_length":"63384","record_id":"<urn:uuid:f2289efc-7c36-46c6-8125-e388189436a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00632.warc.gz"} |
Artifacts Overview
Python SDK for Artifacts Overview¶
Artifacts live in a Comet Workspace and are identified by their name. Each artifact can have multiple versions identified by their version string number.
How to add an asset to an Artifact¶
To log an artifact, you need to first create an Artifact() instance. When you create such an Artifact instance and don't provide an artifact version number string, a new version will be automatically
created for you. If it is the first time you have logged an Artifact with this name in this particular Workspace, it will receive the version string number "1.0.0". Otherwise, it will receive the
next major version number. For example, if you log a new version of an artifact that currently has a version of "2.5.14", then the new version number will be "3.0.0".
After creating an Artifact instance, you then can add asset files or a remote URL to the Artifact. When you are ready to send the Artifact to the cloud, you will log it with Experiment.log_artifact
(ARTIFACT). You can also add aliases when creating a new Artifact() with the aliases=["alias1", "alias2"] argument.
Let's take a look at a specific example.
NOTE: all of these examples assume that you have set your Comet API key via one of the methods. See Python Configuration for more information.
```python from comet_ml import Artifact, Experiment
experiment = Experiment() artifact = Artifact("artifact-name", "dataset") artifact.add("./local-file")
experiment.log_artifact(artifact) experiment.end() ```
In the above example, we create an Artifact with the name "artifact-name" and type "dataset". These are completely arbitrary strings. However, it would be useful to you to name the artifacts in a way
that will make sense to you. Typical artifact types could be "dataset", "image", "training-data", "validation-data", "testing-data", etc.
You can update all the Artifact attributes before logging the artifact object:
```python import datetime from comet_ml import Artifact, Experiment experiment = Experiment() artifact = Artifact("artifact-name", "dataset")
artifact.name = "my-specific-artifact-name" artifact.artifact_type = "training-dataset" artifact.metadata.update({"current_date": datetime.datetime.utcnow().isoformat()}) artifact.version = "1.4.5"
artifact.aliases |= {"staging"} # Aliases are stored a set artifact.tags |= {"customer:1"} # Tags are stored a set ```
How to add a remote asset to an Artifact¶
Sometimes you might want to log a reference to an asset rather than the asset itself. For example, consider that you have a very large dataset (say, hundreds of gigabytes) that lives in an S3 storage
bucket. In this case, it would make sense to log this as a "remote" asset. A remote asset URI can be any string; no particular format is expected.
```python from comet_ml import Artifact, Experiment
experiment = Experiment() artifact = Artifact("artifact-name", "artifact-type") artifact.add_remote( "s3://bucket/dir/train.csv", )
experiment.log_artifact(artifact) experiment.end() ```
How to get a Logged Artifact Version¶
You can retrieve a logged artifact from any workspace that you have permission to access, and a workspace name with the Experiment.get_artifact() method:
python logged_artifact = experiment.get_artifact(NAME, WORKSPACE, version_or_alias=VERSION_OR_ALIAS)
You can retrieve a logged artifact in three ways in the Python SDK:
1. Get the latest artifact version by leaving out the version and alias arguments
2. Get a specific artifact version by passing the version argument
3. Get an aliased artifact version by passing the alias argument
The Experiment.assets attribute contains all the logged assets for a given artifact version. You can distinguish between remote and non-remote assets using the remote attribute of each asset.
```python from comet_ml import Experiment
experiment = Experiment() logged_artifact = experiment.get_artifact( "artifact-name", WORKSPACE, )
for asset in logged_artifact.assets: if asset.remote: print(asset.link) else: print(asset.logical_path) print(asset.size) print(asset.metadata) print(asset.asset_type) print(asset.id) print
(asset.artifact_version_id) print(asset.artifact_id) ```
How to download a Logged Artifact¶
Downloading a logged artifact gives you all of the non-remote assets on your local disk. This will also record that the new experiment has accessed the artifact, for tracking the data flow in your
```python from comet_ml import Experiment
experiment = Experiment() logged_artifact = experiment.get_artifact( "artifact-name", WORKSPACE, )
Download the artifact:¶
local_artifact = logged_artifact.download("/data/input") for asset in local_artifact.assets: if asset.remote: print(asset.link) else: print(asset.logical_path) print(asset.size) print(asset.metadata)
print(asset.asset_type) print(asset.id) print(asset.artifact_version_id) print(asset.artifact_id) ```
This will download only non-remote assets. You can access remote assets through the assets attribute of the logged artifact object and retrieve a remote asset link through the link attribute.
Update an Artifact Version¶
Here is how you can retrieve an existing artifact version, add a new file, compute the new version and log it:
```python from comet_ml import Experiment
experiment = Experiment()
logged_artifact = experiment.get_artifact("artifact-name", WORKSPACE)
local_artifact = logged_artifact.download("/data/input")
local_artifact.add("./new-file") local_artifact.version = logged_artifact.version.next_minor()
experiment.log_artifact(local_artifact) ```
See Also¶
Some related topics: | {"url":"https://www.comet.com/docs/python-sdk/artifacts-overview/","timestamp":"2024-11-10T11:15:42Z","content_type":"text/html","content_length":"75630","record_id":"<urn:uuid:84aef182-a21a-4778-86ec-fcf0ee7f2fb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00233.warc.gz"} |
Difference Between Codomain and Range (With Table)
Mathematics has always been fun for someone who has quite an interest in it. The subject has many branches such as – geometry, algebra, probability, statistics, topology, mathematical logic, number
theory, foundation, and many more. The terms codomain and range are the two terms that are studied in the sets and comes under the branch of mathematical logic.
Codomain vs Range
The main difference between Codomain and Range is that Codomain determines the possible collective values which will come out, whereas Range determines you the actual value which will come out as a
result. Codomain is said to be simple integers, while Range is said to be only even integers.
Codomain is to be the possible values of the function but also affect the answer of the function. They are said to be simple integers and never have restrictions on the size of the sets in a
function. Codomain for the notation of the triple function: (A BG) – A is the domain of the function f, B is said to be the Codomain, and G is its graph.
The range is said to be the exact possible values of a y function but never affect the result of the function. The range is considered to be only the even integers. If we make changes in the values
of the domain, then the values of the range also change concerning it.
Comparison Table Between Codomain and Range
Parameters of Comparison Codomain Range
Definition Codomain is described as all possible sets of values that will result from a given function. The Range is described as all the actual values of a function that will result.
Also Known As Codomain is also known as the definition of a function. The Range is also known as the image of the function.
Purpose Codomain restricts the output of the given function. The Range does not restrict the output of the given function.
Set Size No restrictions It is said to be equal or smaller than the codomain set.
Effect on Answer It has a direct effect on the answer. It does not have a direct effect on the answer.
What is Codomain?
In Mathematics, there are many terms related to the sets that are important to know, and Codomain is among them. It doesn’t have an elaborate explanation but still can be distinguished slightly from
the other terms.
To define what is codomain-it can be stated as the possible values of the given function, which will come out as a result of the respective equation. Codomain is simply integers that have no
restrictions on the size of the set value. Codomain is sometimes referred to as the definition of the function.
Any changes in the domain will not change codomain, meaning if the domain values are changed, then it will not affect the possible values of the codomain, which will come out as a result. Also, it
has been seen that the values of the codomain restrict the output of the given function, and the codomain is said to be the “mapping to” of the function from the domain.
What is Range?
The word Range is used for wider meanings. It may be used in statistics and has an entirely different meaning. And it does mean the difference between the higher and the lower values of the given set
of data. While it meant the difference for the range of the function, that is, it gives you all the possible values coming out as a result.
For a given function, there is only one range, which does not restrict the output of the function of the given equation. And is also known as the image of the function. The size of the set of the
range is either equal or smaller than the size of the set of the codomain of the set.
The Range is also considered to be the subset of the codomain, and any changes in the values of the domain affect the values of the range. Unlike Codomain, the Range is not a mapping from the domain.
It is just the image of all the value that comes out in codomain. It is believed that the range is only the outputted values and does not have any effect.
Main Differences Between Codomain and Range
1. Codomain can be defined as the set of the possible values of a function, while Range can be defined as the most accurate value of a function.
2. Codomain can also be known as the definition of a function, whereas the Range is also known to be the image of a function.
3. It is found that codomain can restrict the output of the function while it is contrasting for the Range as it does not restrict the output of the function.
4. For Codomain, the size of the set is not defined; therefore, no restrictions at all, whereas for Range, the size of the set is said to be equal to or smaller than the codomain set.
5. Codomain has a direct effect on the answer, while the Range does not play this important role and thus, does not affect the answer.
The above two topics are related to mathematics. Both the terms are although different from each other and slightly dependent on one another, but pointing out the difference in two is a major work as
both the terms have the slightest difference and only be distinguished by someone who is a keen lover or expert of mathematics.
Codomain was talking about the exact possible value and is also known to be the definition of the function. It does restrict the output of the function. Also, codomain does not have any size
specified for the sets of the function, and any changes in codomain directly affect the answer.
Contrastingly, not restricting the output and oppositely restricting the size of set Range talks about the possible values and not the exact one like Codomain. And is also known as the image of the
1. https://ijmmu.com/index.php/ijmmu/article/view/1818
2. https://iopscience.iop.org/article/10.1088/1742-6596/1657/1/012073/meta
3. https://www.sciencedirect.com/science/article/pii/S0304397515003151
4. https://www.sciencedirect.com/science/article/abs/pii/S0306261919305446 | {"url":"https://www.nftartranking.com/difference-between-codomain-and-range-with-table/","timestamp":"2024-11-15T02:57:19Z","content_type":"text/html","content_length":"106163","record_id":"<urn:uuid:2bc8a598-5ffb-430e-a279-83b59deed1f7>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00138.warc.gz"} |
Our users:
I cannot afford to pay separate tutoring hours for my twins, because there are so many different aspects of Algebra that they don't understand, but all has been taken care of; Algebrator does the job
better than any tutors I had hired. Now I can dare to hope that my boys will get into a college.
Carl J. Oldham, FL
I really like your software. I was struggling with fractions. I had the questions and the answers but couldnt figure how to get from one to other. Your software shows how the problems are solved and
that was the answer for me. Thanks.
Jessica Simpson, UT
As a mother of a son with a learning disability, I was astounded to see his progress with your software. Hes struggled for years with algebra but the step-by-step instructions made it easy for him to
understand. Hes doing much better now.
Sarah Johnston, WA
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
Search phrases used on 2013-12-28:
• examples of solving systems of linear inequalities by graphing 8th grade math
• free math worksheets commutative property
• largest common factor calculator
• how to solve list intercepts for the graph
• easy to use trigonometry formulas charts
• write in simplified radical form
• formula for triangle algebra
• solving quadratic equations by extracting the square root
• polynomials and exponents calculator
• using ti-83 plus for cube root
• squaring decimals
• algebra calculator
• probability lesson plans for grade 11
• maths sums like average, percentage , +promblems sums for class 4
• online glencoe algebra 2 workbook
• Least common denominator printables
• solving linear equations worksheet
• algebra non calculator practice
• variable exponents
• algebra 1 distributive property calculator
• algerbra media help
• software company entrance test papers aptitude test
• fractions and decimals from least to greatest
• simple and practical algebra software
• Mcgraw hill GED test papers ,online
• algebraically problems
• how to write trig program for ti 84
• free printable algebra quizzes
• expressions with radicals worksheets
• math trivia with answer
• free download book= principle of accounting
• write quadratic equation in standard form
• the hardest math problem
• math area formula questions children
• how quadratic formula programming for ti 84
• elementary algebra made easy
• Saxon Math Answers
• math scale factor
• equation unknowns worksheets
• world's hardest math problem
• simplify rational expressions calculator
• math poem calculus algebra
• university of phoenix algebra 1a test cheats
• linear equation plotting ordered pairs solver
• standardized questions algerbra 1
• solving binomial coefficient
• algerbr
• math homework equation answer
• help solving quadratic exponential equations
• conceptual physics prentice hall worksheets
• algebra 1 honors 8th grade math work book
• ignore punctuation in java strings
• rationalize polynomials
• free online ks2 revision
• java bigdecimal trigonometry
• toughest, funniest math problems in the world
• do my algebra
• VIII SAMPLE PAPERS
• interactive factoring trinomials
• hard math chemistry equation
• free problem solver for square root
• ti 84 graph find y value
• step by step math problems
• Mixed number to a Percent Calculator
• gcf of 3 numbers
• 7th grade math solving inequalities
• free step by step integral solver
• software
• Math Worksheets Permutations
• log base 10 on ti-83
• radical square root calculator
• how to find y-intercept generator
• second order homogeneous differential equations
• how to convert decimals to fractions ti 89 titanium
• simply radical expression
• how to find the value of an exponent variable
• algebra help that solves homework
• fun way of teaching nth term of a linear sequence to a higher set
• square root method
• simultaneous quadratic equations Quadratic form
• free KS3 maths coursework and projects
• factor calculator online quadratic polynomial
• download ebook for company & cost accounts by maheshwari
• cube roots on ti -83 plus
• solve equations algebraically ti 83
• free online test papers based on volume [maths]
• math homework help with scale factors
• Square Roots: What is 7.5 squared?
• matlab function to solve second order differential
• ny education for grade 4 maths faction free lesson
• solving complex numbers with a TI-83 | {"url":"https://mathworkorange.com/math-help-calculator/trigonometry/scientific-calculator-for.html","timestamp":"2024-11-03T04:04:24Z","content_type":"text/html","content_length":"87553","record_id":"<urn:uuid:b87e76f1-8ecb-41d7-8541-0c051fd2f59a>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00158.warc.gz"} |
The Stacks project
Lemma 57.16.5. Let $k$ be an algebraically closed field. Let $X$ be a smooth proper scheme over $k$. Let $f : Y \to S$ be a smooth proper morphism with $S$ of finite type over $k$. Let $K$ be the
Fourier-Mukai kernel of a relative equivalence from $X \times S$ to $Y$ over $S$. Then $S$ can be covered by open subschemes $U$ such that there is a $U$-isomorphism $f^{-1}(U) \cong Y_0 \times U$
for some $Y_0$ proper and smooth over $k$.
Comments (0)
Post a comment
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder,
this is tag 0G0S. Beware of the difference between the letter 'O' and the digit '0'.
The tag you filled in for the captcha is wrong. You need to write 0G0S, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/0G0S","timestamp":"2024-11-03T15:39:39Z","content_type":"text/html","content_length":"16374","record_id":"<urn:uuid:f928a805-8733-4d81-97bd-662816f6634f>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00525.warc.gz"} |
wp_image_matches_ratio() | Function | ClassicPress Documentation
wp_image_matches_ratio( int $source_width, int $source_height, int $target_width, int $target_height )
Helper function to test if aspect ratios for two images match.
(Required) Width of the first image in pixels.
(Required) Height of the first image in pixels.
(Required) Width of the second image in pixels.
(Required) Height of the second image in pixels.
(bool) True if aspect ratios match within 1px. False if not.
File: wp-includes/media.php
function wp_image_matches_ratio( $source_width, $source_height, $target_width, $target_height ) {
* To test for varying crops, we constrain the dimensions of the larger image
* to the dimensions of the smaller image and see if they match.
if ( $source_width > $target_width ) {
$constrained_size = wp_constrain_dimensions( $source_width, $source_height, $target_width );
$expected_size = array( $target_width, $target_height );
} else {
$constrained_size = wp_constrain_dimensions( $target_width, $target_height, $source_width );
$expected_size = array( $source_width, $source_height );
// If the image dimensions are within 1px of the expected size, we consider it a match.
$matched = ( abs( $constrained_size[0] - $expected_size[0] ) <= 1 && abs( $constrained_size[1] - $expected_size[1] ) <= 1 );
return $matched;
Version Description
WP-4.6.0 Introduced. | {"url":"https://docs.classicpress.net/reference/functions/wp_image_matches_ratio/","timestamp":"2024-11-08T06:10:30Z","content_type":"application/xhtml+xml","content_length":"20140","record_id":"<urn:uuid:5ad4296e-0801-486d-86f8-9ecdc723c0d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00118.warc.gz"} |
Form Factors and Long-Distance Effects in $B\to V(P) \ell^+\ell^-$ and $B\to V γ$ PDF
SI-HEP-2011-01 January 11, 2011 Form Factors and Long-Distance Effects 1 in B V (P)ℓ+ℓ and B V γ − 1 → → 0 2 n a J 2 Alexander Khodjamirian 1 ] h Theoretische Physik 1, Fachbereich Physik,
Universit¨at Siegen, p D-57068 Siegen, Germany - p e h [ 1 v 8 Ioverviewthehadronicinputfortheexclusiveflavour-changingneutral- 2 ∗ current B-decays with a vector (V = K ,ρ) or pseudoscalar (P = K,π)
3 2 meson in the final state. After presenting the current status of B P,V . → 1 form factors, I discuss the estimate of the charm-loop effect in B 0 K(∗)ℓ+ℓ− and B K∗γ. → 1 → 1 : v i X r a PRESENTED
AT CKM2010, the 6th International Workshop on the CKM Unitarity Triangle, University of Warwick, UK, 6-10 September 2010 1 Introduction The exclusive B V(P)ℓ+ℓ− and B Vγ decays, with a vector (V =
K∗,ρ,...) → → or pseudoscalar (P = K,π,...) meson in the final state are important for the search for flavour-changing new physics. I will overview the current status of the hadronic input for these
decays. In Standard Model (SM), the CKM favoured B K(∗)ℓ+ℓ− → decay amplitudes reduce to the matrix elements A(B K(∗)ℓ+ℓ−) = K(∗)ℓ+ℓ− H B (1) eff → h | | i of the effective Hamiltonian 4G 10 F ∗ H = V
V C (µ)O (µ), (2) eff − √2 tb tsX i i i=1 ∗ where the small V V part is neglected, and the dominant b s operators ∼ ub us → are O and O , with the Wilson coefficients C (m ) 4.2, C (m ) 4.4 and 9,10 7
9 b 10 b ≃ ≃ − C (m ) 0.3, respectively (for a review see, e.g., [1]). 7 b ≃ − The hadronicmatrixelements ofthese operatorsfactorize, e.g., thecontributionof the operator O = (s γ b )(ℓγρℓ) to the
decay amplitude (1) reduces to the hadronic 9 L ρ L matrix elements K(∗)(p) s γ b B(p+q) , parameterized in terms of the B K(∗) L ρ L h | | i → form factors depending on q2, the momentum transfer to
the lepton pair. For the B Vγ decay one correspondingly needs the form factors at q2 = 0. → In order to access the flavour-changing neutral current (FCNC) interaction en- coded in (2) and to trace and
/or constrain new physics, one has to compare the measured exclusive decay observables with the SM predictions. For the latter, an accurate knowledge of the B P,V form factors is necessary but not
sufficient. Important are also the contrib→utions to the B V(P)ℓ+ℓ− and B Vγ decay → → amplitudes due to the operators O , which have to be analyzed one by one (see 1,2,...,6,8g e.g. [2]) and added to
the dominant FCNC contributions. Especially important are the current-current operators O(c) = (s γ c )(c γρb ) and O(c) = (sjγ ci )(ci γρbj) 1 L ρ L L L 2 L ρ L L L with large Wilson coefficients C (m
) 1.1, C 0.25. Combined with the e.m. 1 b 2 ≃ ≃ − interactions of quarks and leptons, they generate b s transitions with intermediate → c-quark loops. The resulting hadronic matrix elements contain
nonfactorizable parts, not reducible to B P,V form factors. In the following Sect. 2, I will discuss the → form factors and in Sect. 3 the results of the recent analysis [3] of the charm-loop effect
in B K(∗)ℓ+ℓ− and B K∗γ. → → 2 B P,V form factors → The form factors of s γ b and d γ b currents involved in B P(V)ℓ+ℓ− de- L µ L L µ L → cays are related, via SU(3) and isospin symmetry,
respectively, to the form factors fl 1 of u γ b current. The latter can in principle be determined from the measured L µ L B π(ρ)ℓν semileptonic widths, with V taken from the inclusive semileptonic ℓ
ub me→asurements. However, for the B K|(∗) |form factors this way of determination → cannot be sufficiently accurate, because SU(3) is violated up to 20% (like in the fl ∼ ratios f /f , f (0)/f (0)).
The heavy-quark limit (m ,m ) provides an- K π DK Dπ b c → ∞ other useful flavour symmetry, predicting nontrivial relations between B and D − − meson hadronic amplitudes. Hence, in principle, one can
try to obtain the B P,V → form factors employing the D P,V form factors extracted from the exclusive → semileptonic D decays. Again, to achieve a reasonable accuracy, one has to assess the symmetry
violating, 1/m corrections. They are generally not small. E.g., QCD c,b ∼ calculations of the B and D decay constants yield f f 200 MeV, whereas in B D ∼ ∼ the heavy-quark limit f /f m /m √3. We come
to a conclusion that in D B b c ∼ q ∼ order to reach < 20% accuracy one needs a direct calculation of the B P,V form → factors in full QCD. Currently, lattice QCD with 3 dynamical flavours has
achieved a 10% accuracy, ∼ but only for the B π form factors in the region of large q2 > 15 GeV2. The → future goal is to reach 5% accuracy (see e.g.,[4]). The B K form factors have → been obtained
recently [5] in the quenched approximation. The lattice calculations ∗ of B V form factors (V = ρ,ω,K ) are complicated due to instability of vector → mesons, and only earlier results in quenched
approximation are available. In the important region of small and intermediate q2 (large and moderate recoil of the final meson) the B P,V form factors are calculated from QCD light-cone → sum rules
(LCSR). This technique is used for finite quark masses and, therefore takes into account the flavour-symmetry violation. The key nonperturbative objects are the light-cone distribution amplitudes (DAs)
of the light P- or V-meson. The LCSR method and results for B π form factors are overviewed in [6]. The most recent → calculation for the B π form factor is in [7], the B K form factors were updated
→ → in [8]. A typical uncertainty is about 15%, with a little room for improvement. The same method and input successfully reproduce the D π and D K form factors, → → ∗ as shown in [9]. The LCSR
results for B V form factors (V = ρ, ω, K ,φ) are (s) → available from [10]. Note that the instability of the vector meson V ( the ρ ππ or ∗ → K Kπ widths) are neglected also in the LCSR calculation.
→ Alternative LCSR’s for B P,V form factors are obtained with B-meson distri- → bution amplitudes [11] taken as a nonperturbative input. Here all pseudoscalar and vector mesons are treated on equal
footing, being interpolated by a corresponding light-quark current. The overall accuracy of these sum rules is somewhat less than of the conventional LCSR’s with DA’s of light mesons. The gluon
radiative corrections are not yet calculated and the uncertainties of the parameters of B-meson DA’s are still large. Among the non-lattice tools for B P,V form factors are also effective theories →
(HQET, QCD factorization, SCET) where non-trivial relations between the form 2 factors in the large recoil limit are predicted. A comprehensive analysis of B K∗ℓ+ℓ− in terms of this approach can be
found, e.g., in [2], where the contributio→ns with hard gluons to the decay amplitudes are identified and calculated. The soft B P,V form factors defined in the heavy-quark and large-recoil limit and
the → B,P,V-meson DA’s represent the external input. LCSR in SCET [12] can be used to calculate the soft form factors. Further increasing the accuracy in the effective theories demands taking into
account the power-suppressed contributions. Inadditiontothecalculationalmethods, theanalyticalpropertiesoftheB P,V → form factors are employed, in the form of “series-parametrization”. The idea is to
map the complex q2-plane onto the plane of the new variable z(q2), so that z 1 in | | ≪ semileptonic region 0 < q2 < (m m )2. Hence, a Taylor expansion around z = 0 B P(V) −
describestheformfactorwithareasonableaccuracy, allowingonetointer/extrapolate the calculated form factor beyond the region of validity of lattice QCD or LCSR. The latest version of this
parameterization was introduced in [13] where one can find all details. A recent analysis of all form factors relevant for B K(∗)ℓ+ℓ−, combining → LCSR and available lattice QCD results with series
parameterization can be found in [14]. Summarizing, the current uncertainty of B P,V form factors is 12 15%, → − whereas B V form factors have an additional “systematic error” related to the →
instability of vector mesons. 3 Charm loops in B K( )ℓ+ℓ ∗ − → γγ∗∗ cc b s B¯ K¯ K (a) (b) Figure 1: Charm-loop effect in B K(∗)ℓ+ℓ−: → (a)-the leading-order factorizable contribution; (b)
nonfactorizale soft-gluon emission, (c),(d)-hard gluon exchange. (c) (d) In addition to the FCNC contributions containing B P,V form factors, the B V(P)ℓ+ℓ− and B Vγ decay amplitudes are
“contam→inated” by the effects of → → 3 weak interaction combined with e.m. interaction. Let us discuss the most important “charm-loop” effect in B K(∗)ℓ+ℓ− and B K∗γ, generated by the current- (c) → →
current operators O , acting together with the c-quark electromagnetic current (see 1,2 Fig. 1). In B K(∗)ℓ+ℓ−, this mechanism involves an intermediate “charm-loop”, → ∗ coupled to the lepton pair
via the virtual photon. In B K γ, the charm-loop is → also possible if there is an additional gluon exchange with the rest of quarks. The simple c-quark loop diagram (Fig. 1a) is usually included in
the factorization formula for B K(∗)ℓ+ℓ−. In addition, hard-gluon exchanges between the c-quark → loop and the rest of the diagram (Fig. 1c,d) are taken into account, together with other perturbative
nonfactorizable effects (see e.g., [2]). One generally predicts these effects to be small, if q2 is far below the charmonium region. The natural question is:
howimportantarethecontributionsofthesoftgluonsemittedfromthec-quarkloop? (Fig.1b) A related question concerns the validity of the approximation “c-quark-loop plus corrections” at large q2,
approaching the charmonium resonance region. Note thatatq2 = m2, whereψ = J/ψ,ψ(2S),...isoneofthevectorcharmoniumstates, the ψ processB K(∗)ℓ+ℓ− transformsintoanonleptonicweakdecayB ψK(∗), followed →
→ by the leptonic annihilation of ψ. To avoid this “direct” charmonium background, the q2-intervals around J/ψ and ψ(2S) are subtracted from the measured lepton-pair mass distributions in B K(∗)ℓ+ℓ−.
Nevertheless, the intermediate and/or virtual → cc states contribute outside the resonance region and their effect has to be accurately estimated. In [3] these two questions were addressed, employing
the expansion near the light- (c) cone of the product of the two operators: O and c-quark e.m. current. As 1,(2) demonstrated in detail in [3], this operator-product expansion is valid at q2 4m2, ≪ c
provided 2m Λ . The leading-order term of this expansion is reduced to the c QCD ≫ simple cc-loop, resulting in the well-known loop function g(m2,q2) multiplying the c local operator s γρb . The
nontrivial effect is related to the one-gluon term which L L yields [3] a convolution (in ) (q) = dωI (q,m ,ω)s γρδ[ω +D ]G b (3) Oµ Z µραβ c L − 2 αβ L e e of a nonlocal quark-antiquark-gluon
operator with the calculable coefficient function I . In the above, n is the light-cone projection (defined so that q mbn in the µραβ +D ∼ 2 + rest frame of B) of the covariant derivative acting on the
gluon field-strength tensor andG = 1ǫ Gστ. Theexplicit expression forthiscoefficient functionispresented αβ 2 αβστ in [3]e. As explained there in more detail, two and more soft-gluon contributions are
suppressed by additional powers of 1/(4m2 q2) with respect to the leading one-gluon c− term. The operator in (3) results from an effective resummation of the tower of local operators. Inthelocallimit,
atq2 = 0, werecover thelocaloperatorofthecharm-loop with soft gluon, taken into account first in [15] for the B X γ inclusive width and s ∗ → in [16] for the B K γ amplitude. In the adopted
approximation, the calculation of → 4 L L 0.5 M12.5 K ® *,2.0 B K 0.0 ®1.5 c, B (cid:143)Hc c,1.0 C9-0.5 (cid:143)Hc0.5 D C9 -1.0 D0.0 1 2 3 4 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 q2HGeV2L q2HGeV2L Figure
2: The charm-loop effect in B Kℓ+ℓ−(left panel) and B K∗ℓ+ℓ−(right → → panel, one of the three amplitudes) expressed as a correction to the Wilson coefficient C (solid), including the nonfactorizable
soft-gluon contribution (dashed) with the 9 shaded region indicating the estimated uncertainty and the factorizable contribution (dash-dotted). the charm-loop effect at small q2 is then reduced to the
two hadronic matrix elements. One of them is factorizable and expressed via B K(∗) form factors. The soft-gluon → emission contribution yields a hadronic matrix element of the nonlocal operator (3).
This matrix element is calculated in [3] using the LCSR method [11] where the B- meson DA’s (approximated in HQET) are used as a universal nonperturbative input. The result for the charm-loop
contribution to A(B Kℓ+ℓ−) including the soft- → gluon part is expressed in the form of a (process- and q2-dependent) correction to the known Wilson coefficient C . 9 4 L K 2 ® B Figure 3: The charm
loop contribution c, 0 to the Wilson coefficient C for B Kl+l− (cid:143)Hc at q2 below the open charm9thresho0l→d, DC9-2 obtained from the dispersion relation -4 fitted to the OPE result at q2 4m2. ≪ c
2 4 6 8 10 12 The central values are denoted by dashed line, q2HGeV2L shaded area indicates the estimated uncertainties. The calculated function ∆C(cc,B→K)(q2) plotted in Fig. 2[left] is valid at
small 9 q2 4m2. The numerical analysis reveals an important role of the soft-gluon part. ≪ c It has an opposite sign with respect to the factorizable loop term. The result ∆C(cc,B→K)(q2 = 0,µ m ) =
0.17+0.09, (4) 9 ∼ b −0.18 has to be added to C (µ = m ). 9 b 5 3.0 -LΜ2.5 0.4 +Μ *02HLH®NqBK012..50 -2+-LHLΜΜGeV 00..02 1.0 *0K ® B0 0.5 2Hq-0.2 2 4 6 8 10 12 d q2HGeV2L (cid:144)AFB d-0.4 2 4 6 8
q2HGeV2L Figure 4: left:The differential width of B K∗µ+µ− normalized at q2 = 1.0 GeV2, 0 → including the charm-loop effect calculated with the central values of input (solid, the shaded area indicates
estimated uncertainties) and without this effect (dashed); right: The forward-backward asymmetry for B K∗µ+µ− decay. 0 → For B K∗ℓ+ℓ− the effect is more pronounced and kinematically enhanced at → small
q2 (see Fig. 2[right]). As a by-product of our calculation we also estimate the ∗ charm-loop effect in B K γ, where the factorizable loop vanishes and only the → nonfactorizable gluon emission
contributes. Furthermore, to access large q2 we use the dispersion relation in this variable for the invariant amplitudes determining the B K(∗) hadronic matrix elements, → saturating this relation
with the first two charmonium levels. This relation is valid at any q2, hence we can match it to the result of QCD calculation at q2 4m2. ≪ c In addition, we fix the absolute values of the residues
from experimental data on B ψK widths. The integral over the spectral density of higher states is then fitted → as an effective pole. After fixing the parameters of the dispersion relation we predict
the correction ∆C (q2) at large q2 (see Fig. 3). 9 Finally, the observables in B K(∗)ℓ+ℓ− are calculated employing the form → factors and charm-loop amplitudes (see Fig.4). There is a moderate
influence of the charm-loop effect on the position of the zero in the forward-backward asymmetry. Concluding, I would like to emphasize that a careful analysis of all other similar effects (light-quark
loops, weak annihilation etc.) including soft-gluon contributions 6 is necessary for obtaining a complete and accurate prediction for B V(P)ℓ+ℓ− and → B Vγ in SM. → ACKNOWLEDGEMENTS I am grateful to
Martin Gorbahn and Yu-Ming Wang for useful comments. This work is supported by the Deutsche Forschungsgemeinschaft under the contract No. KH205/1-2. References [1] G. Buchalla, A. J. Buras and M. E.
Lautenbacher, Rev. Mod. Phys. 68 (1996) 1125. [2] M. Beneke, T. Feldmann and D. Seidel, Nucl. Phys. B 612 (2001) 25. [3] A. Khodjamirian, T. Mannel, A. A. Pivovarov and Y. M. Wang, JHEP 1009 (2010)
089. [4] J. Shigemitsu, in these proceedings [5] A. Al-Haydari et al. [QCDSF Collaboration], Eur. Phys. J. A 43 (2010) 107. [6] P. Ball, in these proceedings [7] G. Duplancic, A. Khodjamirian, T.
Mannel, B. Melic and N. Offen, JHEP 0804, 014 (2008). [8] G. Duplancic and B. Melic, Phys. Rev. D 78, 054015 (2008). [9] A. Khodjamirian, C. Klein, T. Mannel and N. Offen, Phys. Rev. D 80, 114005
(2009). [10] P. Ball and R. Zwicky, Phys. Rev. D 71, 014029 (2005). [11] A. Khodjamirian, T. Mannel and N. Offen, Phys. Rev. D 75 (2007) 054013. [12] F. De Fazio, T. Feldmann and T. Hurth, JHEP 0802,
031 (2008). [13] C. Bourrely, I. Caprini and L. Lellouch, Phys. Rev. D 79 (2009) 013008. [14] A. Bharucha, T. Feldmann and M. Wick, JHEP 1009 (2010) 090. [15] M.B. Voloshin, Phys. Lett. B397 (1997)
275. [16] A. Khodjamirian, R. Ruckl, G. Stoll and D. Wyler, Phys. Lett. B402 (1997) 167. 7
See more | {"url":"https://www.zlibrary.to/dl/form-factors-and-long-distance-effects-in-bto-vp-ellell-and-bto-v","timestamp":"2024-11-06T04:43:45Z","content_type":"text/html","content_length":"135075","record_id":"<urn:uuid:29d2de62-0446-48f2-8bf6-2633c66d3822>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00035.warc.gz"} |
The Birthday Paradox
Last week I briefly mentioned the idea of hash collisions, using the likelihood of multiple lottery winners as an example. Let’s take some time to build up some more rigorous background in statistics
to support that discussion.
The Birthday Paradox
There’s a paradox in statistics that states, in a room of 23 people, the chance that two people have the same birthday is 50%.
It’s a non-intuitive statement, and was one of the first problems I was assigned when studying statistics in college.[ref]I was taking a course on political analysis. Understanding basic statistics
was a prerequisite to running more advanced regressions and analyses on polling data.[/ref] Our assignment seemed simple:
“How many students do you need to gather in a room such that the probability that two of them share the same birthday reaches 50%?”
Well, it seemed simple.
I might have a math degree, but this problem still took me a week to figure out. It’s fairly straight-forward once you’ve solved the problem once, but for first-time approaches it becomes tricky.
How it Works
Let’s walk through an example.
If only one person is in a room, we don’t care about probabilities (the chance they share a birthday with themselves is a nonsensical thing to calculate). Let’s move on instead.
The chances that the second person shares a birthday with the first is $\frac{1}{365}$. Not very high.
Add a third person, and things begin to become confusing. The chances that any two people share the same birthday becomes $\frac{1}{365}+\frac{1}{365}+\frac{1}{365}=\frac{3}{365}=0.82\%$. This is the
sum of the chances of each permutation of potentially shared birthdays: Persons 1 & 2, Persons 1 & 3, Persons 2 & 3.[ref]For a more rigorous calculation, we also have to factor in the probability
that all of the participants share the same birthday. My point here isn’t to provide a solid additive solution, but to illustrate the snowballing complexity of calculating our probabilities in this
Continuing on with this pattern becomes unnecessarily cumbersome very quickly as the available permutations add up. Instead, it’s easier to calculate the probability instead that, among all members
of the group, there are no shared birthdays.
Assuming there are 365 available birthdays in a year,[ref]We’re ignoring leap years for simplicity.[/ref] if a person is alone in their room we calculate the exclusivity of their birthday as $\frac
{365}{365} = 100\%$.
When a second person enters the room, there are now only 364 birthdays remaining for theirs to be exclusive. This means the probability of mutually exclusive birthdays in a population of two is $\
The pattern continues with a third person: $\frac{365}{365}\times\frac{364}{365}\times\frac{363}{365}=99.20\%$.
And a fourth: $\frac{365}{365}\times\frac{364}{365}\times\frac{363}{365}\times\frac{362}{365}=98.36\%$.
When we reach 10 people in a room, the probability that none of them share a birthday becomes $88.31\%$.
Fifteen people becomes $74.71\%$.
By the time twenty people have entered the room, the probability that their birthdays are all unique has already dropped to $58.86\%$. It’s not much more of a jump to $49.27\%$, the probability of
having absolutely unique birthdays among a group of 23 people.
Flipping the probability back around to our original question (looking at the probability at least two people share a birthday) shows that, once the population reaches 23, the likelihood that two
people in the group share a birthday is $50.73\%$.
My politics professor would be proud.
Extending the Pattern
What’s more, we can extend the pattern a bit, and quickly see that the likelihood of duplicate birthdays grows rapidly as the population increases. In fact, we reach a probability where it’s nearly
certain that two people share a birthday. With just 50 people in a population, there’s an insanely high chance two of them share a birthday: $P(50) = 1-\frac{365}{365}\times\frac{364}{365}\times\
cdots\times\frac{316}{365}=97.04\%$.[ref]Remember, our original exclusivity approach calculates the probability of having completely unique birthdays in a group. To find the probability of a
collision, we have to subtract from 1.[/ref]
In a population of 70, the probability is so close to 100%, I’d be surprised if you didn’t find two people with the same birthday.
Think about this the next time you’re at a WordCamp …
Implications in Computing
When we deal with security, we aren’t talking in terms of birthdays but in terms of hashes.
Hash functions, whether they’re cryptographically secure or not, are merely functions that map an infinite set of potential inputs to a finite space of potential outputs.
MD5, for example, only presents 120,892,581,961,462,917,4706,176 different potential hashes (that’s $32^{16}$ combinations). This might seem like a rather large number, but when you take the
statistics above into account, the complexity for generating a collision is only $2^{64}$.
Put another way, generating a collision for MD5 “is the equivalent of an exhaustive key search of 64 bits.”[ref]MD5 Discussion on Information Security Stack Exchange[/ref]
In order for a one-way hash to be secure, it needs to derive a very large key set, otherwise a birthday attack (looking for potential collisions as a way to crack the hash) renders it useless. | {"url":"https://eric.mann.blog/birthday-paradox/","timestamp":"2024-11-11T14:57:09Z","content_type":"text/html","content_length":"60150","record_id":"<urn:uuid:85ee00e5-fdf0-461d-a0db-26c661a0ecc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00624.warc.gz"} |
Extract the first, middle and last name in Excel 2010Extract the first, middle and last name in Excel 2010
I AM LOOKING FOR A FORMULA TO GRAB THE FIRST AND LAST NAME FOR THE EXAMPLE BELOW.
123456789 - FIRST LAST - EXTRA
I DO NOT NEED THE NUMBERS OR THE EXTRA. JUST THE first and last.
Hi can some one help me in this Conflict,
I need to Get the 1st name, Middle name, And Last Name!
I have a Formula for this. But the issue is this..
The given name are
1.Ruth Lee
2.Justin Gomez
3.Alex D. Narvaez
4.Joanne A. Smith
5.Julie Naval
Issue: Some name have middle name, and the others don't have.
what formula should I use? when there is no middle name it will be blank, but when it have a Middle name it get the middle name.
Thanks 🙂
Hi, I was wondering if I can split the first name, middle name, last name, street address, street name, city, state, and zip if they are all in one cell. For example
Rosalie Ann Mullins 18794 Jamestown Circle Northville, Michigan 48167
This would be all in cell a1. Thanks for your help. Joe...from Michigan.
Hi, I was wondering if I can extract each first, middle, last, street no, street name, city, state, and zip if they are all in one cell? For example:
Rosalie Ann Mullins 18796 Jamestown Circle Northville, Michigan 48168 (All in cell A1). Need to split them up. Thank you for your help....Joe from Michigan.
Can you help me in splitting the below name this way Mr. David Mark Brown
First Name=Mr. David
Middle Name=Mark
Last Name=Brown
Any one help to find out first name and also mid name if mid name more than 3 character
Client Name
meena devi swami
sheela devi
manju devi tak
How to extract NAME ,LOCATION & ABS from the name SHANKAR BEEJ BHANDAR-JAIPUR-CHILLI ABS
in different coloum ,Like SHANKAR BEEJ BHANDAR in one coloum,JAIPUR in another coloum & CHILLI ABS in another coloum
The Example quoted for extracting middle name is good. However, how about it the full name consisted of 4 names or 5 names, the quoted formulas will only give you the second name after space. How can
one get all middle name i.e. names between first name and last name
plz, how to only surname name three words created (Exam. PATEL NAVNIT R = PAT NAV R)
Please tell me that how can do the 3 characters in find last name
such as: Deepak Kumar Maurya then find Last name means Maurya text find.
So please you can tell me about this....!!!
• Deepak Kumar Maurya =RIGHT(A2,LEN(O20)-FIND("u",A2,FIND("u",A2,1)+1))
Pls Use this formula to solve the Question
□ sorry this is right way pls apply this formula
I need to extract the surname when there can be several name possibilities;
Mr G Right
Mr R Tyre
Mr B D Needle
The Last Names I get with the formula you give are;
D Needle
How can I fix that ?
how to remove space and country code from below data
145 2145145 result 1452145145
9987 954748 result 9987954748
9945 45 4878 result 9945454878
'91 9987 125445 result 9987125445
Hi i need to extract number from below data
'as12asd21mis45asdds2' Result 1221452
'11111asdgsadhga1225' Result 111111225
asafas1457 Result 1457
How about you have 2nd name
ex: "CHLOE RAYNE" MENDOZA DE LEON
John Andrew S. Curry
Hi All, i have a list of names Ex-
however, my friend is asking me to put one formula for all but the outcome should only be "US". in this case single words will remain same like CN,IN and for 3 words (PA-US-PA) the result should be
only US.
Pls help me with this i m nt able to put a logic here.
• Hi Ragini,
As per requirement, you don't want to extract the names however you need to use the logical statement here which will check if single word, then return as it is otherwise look for "-US*" and
return "US" as output in the cell. If this is the case, following is the formula will help you to retrieve expected output.
We assumed you have data in column A from cell A1 to A6. Put the above formula in corresponding cell (B1 to down) and it will return what you looking for.
Also, we'd recommend you to login on our official website for your simple or complicated Excel/VBA query and get instant solution for the same.
Site Admin
Hi.. I need to extract the first and last name from an email address. please help
• Hi Samson,
We assume that you have all emails like this.
To extract left name "wendy", use the following formula --> "=LEFT("wendy.poulton@eskom.co.za",SEARCH(".","wendy.poulton@eskom.co.za",1)-1)"
To extract the last name "poulton", use the following formula --> "=MID("wendy.poulton@eskom.co.za",SEARCH(".","wendy.poulton@eskom.co.za",1)+1,SEARCH("@","wendy.poulton@eskom.co.za",1)-SEARCH
You may find both formulas lengthy but you can replace the email id with the cell reference and it will extract the first and last name from email address.
Also, we will request you to please visit www.excelforum.com in case you have any Excel/VBA query. You can ask our experts and get the instant solutions for your queries.
Site Admin
this formulae is used for three names.
i have a requirements of 4 names. how to split them in 4 columns?
for example, i have First Name, Second Name, Third Name and Forth Name in one column combined all together. i want to split them in four columns.
• Hi Muhammad,
Assuming you have "First Second Third Fourth" in cell A1 and as per your request you want to split them as First, Second, Third & Fourth names in individual columns respectively so the formula
would be..
To extract
First Name --> "=LEFT(A1,SEARCH(" ",A1,1)-1)"
Second Name --> "=MID(A1,SEARCH(" ",A1,1)+1,(SEARCH(" ",A1,SEARCH(" ",A1,1)+1)-1)-SEARCH(" ",A1,1))"
Third Name ---> "=MID(A1,SEARCH(" ",A1,SEARCH(" ",A1,1)+1)+1,SEARCH(" ",A1,SEARCH(" ",A1,SEARCH(" ",A1,1)+1)+1)-SEARCH(" ",A1,SEARCH(" ",A1,1)+1)-1)"
Fourth Name ---> =RIGHT(A1,LEN(A1)-SEARCH(" ",A1,SEARCH(" ",A1,SEARCH(" ",A1,1)+1)+1))
Hope this is what you were looking for.
Also, for any simple or complicated query, please login on www.excelforum.com and ask our experts.
Happy Learning,
Site Admin
□ How about making the second two names (2nd and 3rd) in one column as a middle name. How can this be achieved?
None of these work.........you either get a Value! or Name! error message...................
Just don't get in all this mess just simply copy paste the formula given below in formula bar:
=LEFT(A2, SEARCH (" ", A2)-1)
=IFERROR(MID(A2,SEARCH(" ",A2,1)+1,SEARCH(" ",A2,SEARCH(" ",A2,1)+1)-SEARCH(" ",A2,1)),"")
=IFERROR(REPLACE(A2,1,SEARCH("^",SUBSTITUTE(A2," ","^",LEN(A2)-LEN(SUBSTITUTE(A2," ","")))),""),"")
Recommended only on 2010 and above
how i can make that Name As M k Gupta
• Hi Manoj,
We would request you to please provide the exact criteria to meet your expected result. So our experts can help you in achieving the same. 🙂
Thanks in advance!
Team Excel Tip & Excel Forum
Please help. I need to retrieve the middle name with the first character and the last character being in upper case. for instance retrieve "BillY" from broncho billy anderson
• Hi Brian,
We are glad to assist you. Also, we would request you if you have any simple or complicated query, please visit Excel Forum and ask your query to our expert. You will get the solution to your
queries in very less time.
Coming back to your query asked by you above, we assume you have "broncho billy anderson" in cell A2. Enter the following formula in B2 to get the desired result as "BillY". 🙂
=PROPER(MID(A2,SEARCH(" ",A2,1)+1,SEARCH(" ",A2,SEARCH(" ",A2,1)+1)-SEARCH(" ",A2,1)-2))&""&PROPER(MID(A2,SEARCH(" ",A2,SEARCH(" ",A2,1)+1)-1,1))
Let us know if it helps you to meet your requirement.
If you like the solution, we request you to visit our Facebook page and like us. 🙂
Best Regards,
Team Excel Tip & Excel Forum
HELP please: GrahamMaurice - how to obtain two columns = Maurice and the other = Surname = Graham?
Thinking this is simple excel; but I am simple!; best Tony Clemenger;
and I have thousands of them:
CAN HELP???
• HI Tony,
Here is a formula which can help you split First name and Last name wherein the ONLY delimiter is an Uppercase. This should work on major of the names given in your example list apart from the
name like "McGurganJustin" which has 3 uppercase.
Enter the following formula in the designated cells; Considering A1 contains the name: CiciullaSerge
C1 (Last Name) : =MID(A1,MATCH(1,(CODE(MID(A1,ROW($1:$255),1))>=65)*(CODE(MID(A1,ROW($2:$255),1))<90),)+1,255)
PLEASE NOTE: This is an array formula (press CTRL+SHIFT+ENTER)
B1 (First Name) : =SUBSTITUTE(A1,C1,"")
I have a problem extracting compound given names and single names, as well as their middle initial in this order
Larry, Martin Luther S.
Hudson, Mary L.
I used the formula =LEFT(MID(A6,FIND(" ",A6)+1,LEN(A6)),FIND(" ",MID(A6,FIND(" ",A6)+1,LEN(A6)))-1) for first name. OUTPUT was Martin only, I need to add Luther but for Hudson, Mary works just fine,
In the middle initial I used the formula =RIGHT(A23,LEN(A23)-FIND(" ",A23,FIND(" ",A23,FIND(" ",A23)+2))), for Larry, Martin Luther S, output sa Luther S. how can i eliminate Luther but for Hudson,
Mary L works just fine,, what should I do? thank you,
excel 1st sheet a,b.c given 2 sheet full name given but 1 full name then 1 st sheet b number come full name
excel 1st sheet a,b.given 2 sheet full name given but 1 full name
Use Flash Fill, new in Excel 2013, to fill out data based on an example. Flash Fill typically starts working when it recognizes a pattern in your data, and works best when your data has some
For getting last name : use
=MID(A2,LOOKUP(1,--((MID(A2,ROW(INDIRECT("1:" &LEN(A2))),1))=" "),ROW(INDIRECT("1:" &LEN(A2)))),50)
HI please help
i need destination 3 charac in between below mentioned city routing,
plz tell me how to split first and last name if they are combined with any character other than letters. E.G.Avnesh.chaudhary
plz help me..plz
• To extract the last name you can use below mentioned formula
=IF (ISNUMBER (FIND (",",A2)),RIGHT(A2,LEN(A2)-FIND(",",A2)-1),A2)
To extract the first name use this formula =LEFT (A2, SEARCH ("@", A2)-1).
□ hi can any 1 provide excel links where i can learn with examples please i will be very thankful to you 🙂
☆ To extract the name from right you can use below mentioned formula
=IF (ISNUMBER (FIND (“,”,A2)),RIGHT(A2,LEN(A2)-FIND(“,”,A2)-1),A2)
For Example:-
Column A Column B
Names_______________First Names
Bush, George________George
Seinfeld, Jerry_____Jerry
Jordan, Michael_____Michael
To extract the name from left use this formula =LEFT (A2, SEARCH (“@”, A2)-1)
For Example:-
Column A Column B
Names_______________First Names
□ @Nisha Dahawan,
To the best of my knowledge your, suggested, formula does not extract the Last(!) name.
Please recheck.
Michael (Micky) Avidan
“Microsoft® Answers” – Wiki author & Forums Moderator
“Microsoft®” MVP – Excel (2009-2014)
By the Way - if I'm not mistaken the same can be accomplished with the feature: "Text to Columns".
Michael (Micky) Avidan
“Microsoft® Answers” – Wiki author & Forums Moderator
“Microsoft®” MVP – Excel (2009-2014)
In order to extract the last name you may try a shorter formula.
=TRIM(RIGHT(SUBSTITUTE(TRIM(A2)," ",REPT(" ",255)),255))
Michael (Micky) Avidan
“Microsoft® Answers" - Wiki author & Forums Moderator
“Microsoft®” MVP – Excel (2009-2014)
• This is awesome...thank you so much...it compensated for errors from folks with only a first and last name (i.e., not middle name).
□ Thankyou......
Any easiest formula for MID name?
• This formula does not work. I have excel 2010. What am i doing wrong?
• How can extract MID name with this Function Formula "=TRIM(RIGHT(SUBSTITUTE(TRIM(A2)," ",REPT(" ",255)),255))
We can extract FIRST name and LAST name easily with this formula function but middle name not get it. Please help soon.
• Great formula, works great except for surnames with a space like De Franco or Del Rosso.
"Try this to find 'Smith Jr.'
=IF(ISERROR(SEARCH("" "",A2,SEARCH("" "",A2)+1)),RIGHT(A2,LEN(A2)-SEARCH("" "",A2)),RIGHT(A2,LEN(A2)-SEARCH("" "",A2,SEARCH("" "",A2)+1)))"
• Can you help me in splitting the below name this way Mr. David Mark Brown
First Name=Mr. David
Middle Name=Mark
Last Name=Brown
How does one separate out JR's? eg. John W. Smith Jr. | {"url":"https://www.exceltip.com/excel-text/extract-the-first-middle-and-last-name-in-microsoft-excel.html","timestamp":"2024-11-12T22:22:27Z","content_type":"text/html","content_length":"184262","record_id":"<urn:uuid:31a20a3b-e3fa-47f0-a7b8-53cf2f32cb58>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00500.warc.gz"} |
Space-Time Spectral Element Methods in Fluid Dynamics and Materials Science
In this manuscript, we propose space-time spectral element methods to solve problems arising from fluid dynamics and materials science. Many engineering applications require one to solve complex
problems, such as flows containing multi-scale structure in either space or time or both. It is straightforward that high-order methods are always more accurate and efficient than low-order ones for
solving smooth problems. For example, spectral element methods can achieve a given level of accuracy with significantly fewer degrees of freedom compared to methods with algebraic convergence rates,
e.g., finite difference methods. However, when it comes to complex problems, a high order method should be augmented with, e.g., a level set method or an artificial viscosity method, in order to
address the issues caused by either sharp interfaces or shocks in the solution. Complex problems considered in this work are problems with solutions exhibiting multiple scales, i.e., the Stefan
problem, nonlinear hyperbolic problems, and problems with smooth solutions but forces exhibiting disparate temporal scales, such as advection, diffusion and reaction processes. Correspondingly, two
families of space-time spectral element methods are introduced in order to achieve spectral accuracy in both space and time. The first category of space-time methods are the fully implicit space-time
discontinuous Galerkin spectral element methods. In the fully implicit space-time methods, time is treated as an additional dimension, and the model equation is rewritten into a space-time
formulation. The other category of space-time methods are specialized for problems exhibiting multiple time scales: multi-implicit space-time spectral element methods are developed. The method of
lines approach is employed in the multi-implicit space-time methods. The model is first discretized by a discontinuous spectral element method in space, and the resulting ordinary differential
equations are then solved by a new multi-implicit spectral deferred correction method. A novel fully implicit space-time discontinuous Galerkin (DG) spectral element method is presented to solve the
Stefan problem in an Eulerian coordinate system. This method employs a level set procedure to describe the time-evolving interface. To deal with the prior unknown interface, a backward transformation
and a forward transformation are introduced in the space-time mesh. By combining an Eulerian description with a Lagrangian description, the issue of dealing with the implicitly defined arbitrary
shaped space-time elements is avoided. The backward transformation maps the unknown time-varying interface in the fixed frame of reference to a known stationary interface in the moving frame of
reference. In the moving frame of reference, the transformed governing equations, written in the space-time framework, are discretized by a DG spectral element method in each space-time slab. The
forward transformation is used to update the level set function and then to project the solution in each phase onto the new corresponding time-dependent domain. Two options for calculating the
interface velocity are presented, and both options exhibit spectral accuracy. Benchmark tests in one spatial dimension indicate that the method converges with spectral accuracy in both space and time
for the temperature distribution and the interface velocity. The interrelation between the interface position and the temperature makes the Stefan problem a nonlinear problem; a Picard iteration
algorithm is introduced in order to solve the nonlinear algebraic system of equations and it is found that just a few iterations lead to convergence. We also apply the fully implicit space-time DG
spectral element method to solve nonlinear hyperbolic problems. The space-time method is combined with two different approaches for treating problems with discontinuous solutions: (i) space-time
dependent artificial viscosity is introduced in order to capture discontinuities/shocks, and (ii) the sharp discontinuity is tracked with space-time spectral accuracy, as it moves through the grid.
To capture the discontinuity whose location is initially unknown, an artificial viscosity term is strategically introduced, and the amount of artificial viscosity varies in time within a given
space-time slab. It is found that spectral accuracy is recovered everywhere except in the "troublesome element(s)'' where the unresolved steep/sharp gradient exists. When the location of a
discontinuity is initially known, a space-time spectrally accurate tracking method has been developed so that the spectral accuracy of the position of the discontinuity and the solution on either
side of the discontinuity is preserved. A Picard iteration method is employed to handle nonlinear terms. Within each Picard iteration, a linear system of equations is solved, which is derived from
the space-time DG spectral element discretization. Spectral accuracy in both space and time is first demonstrated for the Burgers' equation with a smooth solution. For tests with discontinuities, the
present space-time method enables better accuracy at capturing the shock strength in the element containing shock when higher order polynomials in both space and time are used. Moreover, the spectral
accuracy of the shock speed and location is demonstrated for the solution of the inviscid Burgers' equation obtained by the shock tracking method, and the sensitivity of the number of Picard
iterations to the temporal order is discussed. The dynamics of many physical and biological systems involve two or more processes with a wide difference of characteristic time scales, e.g., problems
with advection, diffusion and reaction processes. The computational cost of solving a coupled nonlinear system of equations is expensive for a fully implicit (i.e., "monolithic") space-time method.
Thus, we develop another type of a space-time spectral element method, which is referred to as the multi-implicit space-time spectral element method. Rather than coupling space and time together, the
method of lines is used to separate the discretization of space and time. The model is first discretized by a discontinuous spectral element method in space and the resulting ordinary differential
equations are then solved by a new multi-implicit spectral deferred correction method. The present multi-implicit spectral deferred correction method treats processes with disparate temporal scales
independently, but couples them iteratively by a series of deferred correction steps. Compared to lower order operator splitting methods, the splitting error in the multi-implicit spectral deferred
correction method is eliminated by exploiting an iterative coupling strategy in the deferred correction procedure. For the spectral element discretization in space, two advective flux reconstructions
are proposed: extended element-wise flux reconstruction and non-extended element-wise flux reconstruction. A low-order I-stable building block time integration scheme is introduced as an explicit
treatment for the hyperbolic terms in order to obtain a stable and efficient building block for the spectrally accurate space-time scheme along with these two advective flux reconstructions. In other
words, we compare the extended element-wise reconstruction with I-stable building block scheme with the non-extended element-wise reconstruction with I-stable building block scheme. Both options
exhibit spectral accuracy in space and time. However, the solutions obtained by extended element-wise flux reconstruction are more accurate than those yielded by non-extended element-wise flux
reconstruction with the same number of degrees of freedom. The spectral convergence in both space and time is demonstrated for advection-diffusion-reaction problems. Two different coupling strategies
in the multi-implicit spectral deferred correction method are also investigated and both options exhibit spectral accuracy in space and time. | {"url":"https://repository.lib.fsu.edu/islandora/object/fsu%3A552114","timestamp":"2024-11-13T17:44:39Z","content_type":"text/html","content_length":"63529","record_id":"<urn:uuid:cb5faf04-d1c1-4fa6-968c-6ed9f15e5937>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00527.warc.gz"} |
****a very naughty law of gravity
Apart from everything else, your sleight of hand gravity man did leave you this quote from his last years. Through our interplanetary technology, the link is now quite clickable."Therefore does this
apple fall perpendicularly or towards the centre? If matter thus draws matter; it must be proportion of its quantity. Therefore the apple draws the Earth, as well as the Earth draws the apple." Sir
Isaac Newton, 1726.
In this latter years pronouncement about falling apples, therefore the apple draws the earth is the main wrong.Beyond this first and main wrong, your next problem is the law itself. A
multiplication of two masses is meaningless. It's not science. It is a little sciencey, though, when you allow one of the masses to be the gravity field of that mass.For all your great professors
oblivious to the apple hypnotism they work under, we look into your rear view mirror of scientific deduction. One of Sir Isaac's masses has been a considered rate of acceleration towards mass due to
a masses inverse square law. And the other mass is mass. And vica versa. When you see this complexity that Sir Isaac built you will also see that his law in full was always unfit for your beautiful
planet. The law........'Every particle in the universe attracts every other particle in the universe with a force directly proportional to the product of their masses and inversely proportional to
the square of their distance apart'
As a tool, your Sir Isaac's second law of motion works. This law is the acceleration of a body is in proportion of the force causing an acceleration. The acceleration is always in the direction of
the force.First we omit the unknown mechanic of how separate masses combine to create one gravity attracting two ways at once. Then, through a presumed force of attraction within any particle that
emanates through the entire universe to every other particle and this second law of motion separately applied to any two particles in the universe, Sir Isaac's mutual formula magically appears for
you. And the magic is twice.The problems are the instances are or would be the effect of force. A momentum change. Whatever a mutual force could have been, each instance was not the cause of
momentum change that it needed to be. Beyond the life of your wondrous seventeenth century hypnotist, this should explain to you how your mathematical physics professors have been making his law
cosmetically coherent. The 'law' is being taken back to the large mass half of its origin when it is set equal to the weight of a small mass as measured by your second law of motion. And the law of
gravity itself just an off beat cause and effect muddle gaining acceptance through a misread constant between one mass, its inverse square law and a rate of acceleration towards that mass. If an
earthling follows your blue italic link below you can see how your own mathematical procedure says to you that the G you have found is not the constant of Sir Isaac's law of gravity. Understandably
this has to be stated extra-terrestrially. It's the constant between one mass, its inverse square law and a rate of acceleration towards that mass. Before they wake up in the morning, your
mathematical physics professors would be well put to understand that the discovered constant is only to do with their larger mass.The smaller mass is cancelled out in your derivation of G and takes
no part in the process beyond being used to measure the magnitude of the inverse square law of the larger mass. E.G. Whichever apple falls from an earth tree, the rate of acceleration is 9.8 m/s/s.
The mass magnitude of the apple that fell is irrelevant to its observed rate of acceleration towards your beautiful earth. Your Galileo was in fact dropping various masses and saying that sort of
thing lots of earth decades before your Sir Isaac Newton was born. Your missing point is the rate of acceleration of m towards M is not dependent on the magnitude of m. Using your method of
derivation, the units you then attach to your G are errantly saying otherwise.
Apart from finding a way of making the magnitude of the small mass relevant, the word "fixed" would have to vanish from the large mass for this to become a determination of a mutual gravitation
constant for you. In its place you would need a measurement of the movement of the large mass. And that measurement appearing somewhere in your determination of your mutual gravitation constant of G.
The thing for an earthling to note is M and m switch from inside to outside the brackets as you go from one instance of Sir Isaac's law of gravity to the other. If this is how the formula has come to
appear on your beautiful planet, you should be able to see that it is just an unedifying confusion of two suspected momentum changes. Not a force of mutual gravitation at all.
Beyond Sir Isaac's life and if you do wish to reduce mutual gravitation 1 and mutual gravitation 2 to one 'drawing' force, you have no defined and presumably two directions of acceleration. At this
point the law is rendered senseless. Any force causes a single direction of acceleration, or potential acceleration in the case of weight, and the direction of the acceleration, or potential
acceleration, and the direction of the force are coincidental.
If you wanted, you could forget the viewing of each 'drawing' force as the other and further investigate your Sir Isaac's two presumed drawing forces. If you go down that path, you would have a
direction of acceleration for each force. These two forces could appear to be an equal and opposite force pair. They do have equal scalar contents. If you can show how M and m combine to form two
mutual forces acting in opposite directions, you may have equal opposite vector magnitudes. If you did manage your mathematical physics inventiveness in that area, the next task would be cooking up
with the mathematics that show how inverse square laws penetrate each other. Or you could forget all that and knuckle down and realize Sir Isaac Newton has neglected opposite directions of fall and
somewhat gleefully hypnotized himself into a mutual falsehood through his own second law.
For whatever it's worth to your planet, the fact that your Sir Isaac endeavoured to explain the tides of your beautiful planet through your moon's inverse square law alone would indicate that he was
still seeing mutual gravitation 1 and mutual gravitation 2 as separate forces during his own lifetime.
And whatever a mutual force could have been, the divergence of mutual magnitudes from 'it must be of proportion of its quantity' always made your law impossible in the product of masses area, anyway.
Example M + m = 10
Graphically and according to your junior schoolbook earth - moon dynamics, when you materialize some of your earthlings upon your moon to look back at yourselves, there is greater mutual attraction
between your planet and its moon than when your same earthlings are materialized back on your planet. Or, if your joint centre of gravity extension is allowed into your hypnotized mutual gravitation
considerations, a moving of mass from M to m alters the magnitude of that would be joint centre. Same amount of mass, greater or lesser mutual gravitation depending on which way the mass has been
transferred. If that phenomena is a truth, your wonderful professors will explain how so to you from their lecterns.
However your Sir Isaac derived his naughty law of mutual gravitation for the review that your sleeping earth academics understandably didn't even provide yet, he began with a fundamental flaw.
At the beginning of his gravity thought process, he has deduced that all masses have gravity. That was his universal summation of the gravity of the universe.
His mutual idea extends this universal concept into therefore all masses attract all other masses. Sir Isaac's fundamental flaw is his extension of a universal nature of gravity into this
unconsidered mutual nature of gravity. And the flaw is as simple as not recognizing or placing vector values on the opposite directions of falling in between the centres of any two particles in the
What you have done since Sir Isaac's hour of direction of fall neglect is expect all inverse square laws to be perfect unhindered arithmetical extensions from their source particle to all other
particles in the universe.
Contrary to this, your modern calculations work out where your space shot will leave your earth's inverse square law and enter another. This demonstrates or proves to you that inverse square laws
are not perfect vector extensions away from a particular particle of the universe to all other particles in the universe.
On these fine but sleeping university campuses of yours you are presuming these extensions without provided analytical reasoning. For your law to become an original thinkable possibility, you would
be required to present rational argument of how a direction of fall rebuilds from termination into further extension. Or, on the hot from your neighbouring planet diagram below, show how your moon's
inverse square law rebuilds from zero to reach your beautiful planet and then reach other particles beyond the far side of your beautiful planet.
On your planet's side of the termination point, anything dropped is not observed to fall towards your moon. The idea that it falls towards your moon a little bit is where your earth scholars need to
wake up and slip into normal solar system intelligence. Objects never simultaneously accelerate in opposite directions. Whatever a direction of acceleration is, it is that direction alone. And it is
not only us Martians saying that upon our materialization here on your beautiful planet. Yes. You say it to yourselves through your very own Sir Isaac Newton's second law.
Acceleratively your space shots do tell you that the direction of acceleration between your beautiful earth and its moon alternates. Because opposing real values are used to find a zero 'value', the
precise location of the zero point is only ever approximately calculable. While the zero acceleration point is never an exactly calculable location, through your space shots, the change in direction
of acceleration in between your earth and its moon is earth knowable.
According to your Sir Isaac's law of mutual gravitation and tidal analysis, though, both inverse square laws are arithmetically coexisting intact at the change of direction location. If placed at
this point, a droplet of ocean would accelerate at a rate of 0.0033 m/s/s towards the moon. The same drop would also accelerate at the same rate towards your beautiful earth and at the same time.
Your sleeping earth professors really do have a cake and eat it fantasy law of gravity program embedded deep into their daydreams.
If you start to become acquainted with the man your Sir Isaac Newton was, it becomes apparent that he was as much a determined peer conqueror as scientist. "If I have seen further than others, I
have stood on the shoulders of giants" he wrote when he had planetary motion done and dusted. And his peers cheered and he was a happy man.
From the visiting Martian point of view, he may not have been entirely settled after his imaginary shoulder standing exercise. Not understanding his universal law of gravitation himself was a
problem. What he needed was incontrovertible proof of every particle in the universe attracting every other particle in the universe. Sometime after the publication of his law of gravity, your
records show that he became of parliament. Statutory law making was not his business, though. His only known earth words in parliament were to do with an open earth window changing atmospheric
conditions inside an earth building. As he silently sat, his mind looked for ways to understand his mutual gravitation. One day the parliamentary earth Bible took his fancy. To make simple sense of
his formula to himself, a falling apple story dreamily structured into his mind. As it did, new earth apple folklore began in earnest.
You should understand that an errant formula about the universe indicates that your apple story is rhetoric made to convince at least Sir Isaac Newton that he had found the law of gravity that
explains the universe. A chance occurrence of seeing an earth apple fall to errantly explain the gravity of the universe should say to you that your Sir Isaac Newton has invented a story.
Realistically, if you are into the study of falling objects, you do what one of Sir Isaac's giants did. You drop things. You don't need to wait for the fruit to fall from the apple tree.
Your most addled succeeding theory is Sir Isaac Newton's giants being weight pushing up on Sir Isaac's shoes as he stood upon their shoulders. This weight is not measurable, detectable or necessary
sleeping academic presumption. The equal and opposite force to one side of your beautiful planet is the weight of the direct opposite side of your beautiful planet. Both Sir Isaac and his giants are
part of the weight of one side of your planet. From our Martian point of view, teaching your school children that the shoulders of giants push up onto Sir Isaac's shoes is your scholarship having an
unfortunate and unrelenting nightmare. Your sleeping professors really are educationally deficient.
When the terminations of your earth and moon inverse square laws on the earth - moon axis are recognized, to an earth school child your Sir Isaac's mutual summary of the gravity of the universe
should be no more.
Of the greater significance for you is, when the terminal point on the earth - moon axis is acknowledged, you have your honest moment of entering a tidal clarity. Whereas your Sir Isaac originally
tried to explain your high tides through relative lunar inverse square law magnitudes on the earth moon - axis, your answer lies in relative earth inverse square law magnitudes between that axis and
the axis at right angles to that axis. The one that goes through the centre of your beautiful planet.
Once your current sleeping academics start waking up and seeing the high tide under your moon through lateral relative terrestrial inverse square law magnitudes (and not any gravitation towards your
moon whatsoever), through your future educational standards, things should get enlightened here on your planet.
Parallel, you can easily see both sets of tides as the equal and opposite downward forces of your beautiful planet as those downward forces appear up here on the surface of your beautiful planet.
For your earth schoolchildren, if Sir Isaac's equal and opposite force axiom is a true science and if using Sir Isaac's second law of motion to measure weight is also a true science, the tidal status
of one side of your earth being an ongoing equal and opposite reaction to the one on the direct opposite side of the core of your earth cannot be wrong. As your beautiful earth turns through its
inverse square law, direct opposite sides of your earth alter in force magnitude in accordance with your own Sir Isaac's second and third laws.
From one of your Sir Isaac's books of one and half Martian centuries ago, a mutual gravitation declaration in terms of what we (even though your Sir Isaac's penchant for having his portrait painted
would indicate he was seeing himself as an historical figure, 'we' would mean not you but Sir Isaac's peers of the time) must do. Upon our materialization here on your beautiful earth, we added some
pretty colours to Sir Isaac's command.....
Lastly, if it universally appears, by experiments and astronomical observations, that all bodies about the earth, gravitate toward the earth; and that in proportion to the quantity of matter which
they severally contain; that the moon likewise, according to the quantity of its matter, gravitates toward the earth; that on the other hand our sea gravitates toward the moon; and all the planets
mutually one toward another; and the comets in like manner towards the sun; we must, in consequence of this rule, universally allow, that all bodies whatsoever are endowed with a principle of mutual
The declaration of what his peers must do was necessarily vague. By beginning in hope of proof, your Sir Isaac has known he wasn't necessarily dealing in a complete sense. This admission of
uncertainty is his honest message to his peers.
They severally contain is addition. Not the product of quantities that appears in your actual law. There isn't evidence to suggest a scientific frame of earth mind when your Sir Isaac declared mass
multiplied by mass as the gravity base of the universe. His end of life words of in proportion of its quantity is where he had his head in a happier state.
The moon gravitating towards the earth according to the quantity of its matter is in stark contradiction of your moon gravitates towards your earth in accordance with the product of earth and moon
quantities (the M x m of your or his actual law).
Words highlighted in vermillion are where your main problem of academic conscience and character resides. Your sea (under your moon) is not an astronomical observation of a gravitation towards your
moon in the slightest. It is impossible for an ocean or anything at all to simultaneously gravitate in opposite directions.
The reality is your sea (under your moon) only gravitates towards your earth. This gravitation is less because of the interruption caused by your moon's inverse square law to your earth's inverse
square law. A lesser weighting/gravitating in one direction is not a weighting/gravitating in an opposite direction. Once that is understood by your earth schoolchildren, Sir Isaac's command of what
"we" must universally allow is by the bye. Conversely, your earth schoolchildren could all just heave a sigh of earth relief and start your education going properly. Because of your Sir Isaac's
hypnotic apple story, you are stuffing your adult scholarship up with a scantily thought about axiom that has two inverse square laws becoming one and then having the wherewithal to act in two
directions at once.
The difference between gravitates towards your beautiful earth less and your fictitious gravitates towards your moon a little bit is significant with respect of further understandings of inverse
square laws. And the high tide on the far side of your beautiful planet definitely is not a gravitation towards your moon. If it was a gravitation to do with your moon in the slightest, it would be a
gravitation away from your moon.
When an earth professor wakes up, Sir Isaac's use of the word endowed should indicate that Sir Isaac's law of gravity at best was incomplete. Endowed leaves the reasons of how and why physically
unexplained and also adrift from a mathematical foundation.
To tune in with the inner dynamics of this beautiful almost round planet of yours, squeezing party balloons may help your earth adolescents or any still hypnotized mathematical physics professor or
any one of you at all. The analogy is imperfect. But it does place a view of equal and opposite downward earth forces in your hands as your earth vision takes in the changing shape of the earth party
On our planet, Mars, we have deduced this. The core of a planet or star is weightless.
The evidence supporting weightless cores for you is the inverse square law of the beautiful planet you are on is measured as diminishing in your underground ore extraction exercises. Meaning not
only do inverse square laws conclude / are interrupted where they meet an adjacent one. Inverse square laws break down in side the celestial body that they are causing a descent towards. Going down
from the surface of your beautiful earth, you would expect the rate of fall to taper down to zero across its centre. From there you can deduce for yourselves whether or not there could be such things
as what your hypnotized mathematical scientists are calling gravity holes in space.
What you do need to understand is this. It's the surface weight that an inverse square law causes that holds a planet to its centre. Not the inverse square law getting bigger and bigger beneath the
surface of a planet. And after that and with some consideration of your earthquakes, the implication is that a planet's inverse square law is set at the planet's surface. More precisely at the point
where the rate of acceleration towards a planet's centre starts to decrease would be the point where an inverse square law becomes of a reversing arithmetical structure to that of an actual inverse
square law. If you can get on top of that, the next step is considerations of how mass actually fixes an inverse square law in the space that surrounds mass. An inverse square law in space says the
nature of space is changing with vertical distance. If they can get unhypnotized, post earth stone age tidal thinking should be ahead for all your beautiful earth professors.
If cleansed just a little bit, from here the waking up big minds of earth professors could take the inverse square law over in proper style. As an ongoing Martian reaction to your heavily hypnotized
educational movers and shakers, the next reluctant web page is highly positive about life on earth. But quickly descends further into Martian. Your earth kids might enjoy it, though. The Venetians
say it's a brand new earth whoopee cushion. If the page is needed at all here on your beautiful planet, there may have been other or better ways of handling this unusual situation of a naughty law of
gravity on a beautiful neighbouring planet. We haven't fully comprehended what your societal mores actually are yet. But amusement about unexpected flatulence seems to be well in the mix.
Incidentally, your modern evidence would show the proportion your Sir Isaac was assuming was between the product of quantity and surface area. Not simply quantity. For example, you now know your
moon has 1/6 of the surface gravity of your earth. Whether you accurately know your moon's mass or not, it has about 1/50 of the volume of your earth. You believe your moon has about 1/81 of the mass
of your beautiful earth.[(earth's surface area x 9.8)/(moon's surface area x 1.6 = 82.4)] Whether or not your assessed celestial masses are reliable is your unknown.
Admittedly it isn't really for strangers to earth to say terminal and mirror acceleration magnitudes is where your earth university standard should currently be at if you want your earth school
teachers to stop being ongoing apple hypnotists. Your earth professors should be the ones saying that. But, as they have all been hypnotized into the mind of Sir Isaac Newton, aliens obviously feel a
moral obligation to a neighbour trying to destroy its own beauty through this very rare case of falling apple hypothermia. Or whatever it is in precise earth medical terms. | {"url":"https://www.whyvenusturnsbackwards.com/a-very-naughty-law-of-gravity.html","timestamp":"2024-11-14T16:51:50Z","content_type":"text/html","content_length":"65316","record_id":"<urn:uuid:04b22e86-32f6-4ea1-aff4-ff300dce3489>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00012.warc.gz"} |
Definition of subatomic projection of funcoids - Math Research of Victor PortonDefinition of subatomic projection of funcoids
I have proved that for every funcoid $latex f:\prod A\rightarrow\prod B$ (where $latex A$ and $latex B$ are indexed families of sets) there exists a funcoid $latex \mathrm{Pr}^{(A)}_k f$ (subatomic
projection) defined by the formula:
$latex \mathcal{X} \mathrel{\left[ \Pr^{\left( A \right)}_k f \right]} \mathcal{Y}
\Leftrightarrow \\
\prod^{\mathsf{RLD}}_{i \in \mathrm{dom}\, A}
\left( \left\{ \begin{array}{ll}
1^{\mathfrak{F} \left( A_i \right)} & \mathrm{if}\,
i \neq k ;\\
\mathcal{X} & \mathrm{if}\, i = k
\end{array} \right. \right) \mathrel{\left[ f \right]}
\prod^{\mathsf{RLD}}_{i \in \mathrm{dom}\, B} \left( \left\{
1^{\mathfrak{F} \left( B_i \right)} & \mathrm{if}\,
i \neq k ;\\
\mathcal{Y} & \mathrm{if}\, i = k
\end{array} \right. \right) . $
My draft book is modified to include this new theorem. | {"url":"https://math.portonvictor.org/2013/04/20/subatomic-projection/","timestamp":"2024-11-10T16:12:02Z","content_type":"text/html","content_length":"98195","record_id":"<urn:uuid:e3eaa8d4-30e5-4cae-a9a8-ae1f52a9fcc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00317.warc.gz"} |
Scatter Plot Graph
ConceptDraw DIAGRAM extended with Scatter Diagrams solution is the best diagramming and vector drawing software for quick and easy designing the Scatter Plot Graph of any complexity.
Scatter Diagrams solution is a part of Graphs and Charts Area which includes also a variety other solutions for simple and fast drawing professional looking Bar Charts, Line Graphs, Pie Charts, Area
Charts, Picture Graphs, Histograms, etc.
Example 1. Scatter Plot Graph - Changes in Top Income Share and Top Marginal Tax Rate
This example was created in ConceptDraw DIAGRAM using the tools of Scatter Diagrams solution. It shows a Scatter Plot Graph which illustrates the changes in top income share and marginal tax rate. An
experienced user spent 20 minutes creating this example. Applying of this professional looking, visual and clear Scatter Plot Graph example will have a great success.
Use Scatter Diagrams solution for designing your own Scatter Plot Graph quick, easy and effective.
All source documents are vector graphic documents. They are available for reviewing, modifying, or converting to a variety of formats (PDF file, MS PowerPoint, MS Visio, and many other graphic
formats) from the ConceptDraw STORE. The Scatter Diagrams Solution is available for all ConceptDraw DIAGRAM or later users.
See also Samples:
THREE RELATED HOW TO's:
This sample was created in ConceptDraw DIAGRAM diagramming and vector drawing software using the Bubble Diagrams Solution from the "Diagrams" area of ConceptDraw Solution Park. This sample clearly
shows the Four Dimensions Bubble Diagram of the distribution of chlorine contaminant in the water source. This Bubble Diagram is very useful in the chemistry, hydrology, and ecology.
Picture: Four Dimensions Bubble Plot
Related Solution:
You've got to draw the Scatter Graph and look for the convenient tool which will help you? Direct your attention on the ConceptDraw DIAGRAM diagramming and vector drawing software extended with
Scatter Diagrams Solution from the Graphs and Charts Area.
Picture: Scatter Graph
Related Solution:
This sample shows the Business Report Pie Chart. The Pie Chart visualizes the data as the proportional parts of a whole, illustrates the numerical proportion. Pie Charts are very useful in the
business, statistics, analytics, mass media.
Picture: Business Report Pie. Pie Chart Examples
Related Solution: | {"url":"https://www.conceptdraw.com/How-To-Guide/scatter-plot-graph","timestamp":"2024-11-09T04:52:11Z","content_type":"text/html","content_length":"50046","record_id":"<urn:uuid:e44dc8ad-7c5a-459f-b815-b57bb1a5e16c>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00622.warc.gz"} |
수학강연회 - Hybrid discontinuous Galerkin methods in computational science and engineering
Computation facilitates to understand phenomena and processes from science and engineering; we no longer need to depend only on theory and experiment. Computational Science and Engineering (CSE) is a
rapidly developing multidisciplinary area using computational mathematics in the fields of science and engineering. CSE focuses on modeling-computer simulation-visualization, based on applied
mathematics. We aim to provide problem-solving methodologies and robust tools for numerical simulation.
In this talk, we present our recent efforts for developing a robust numerical scheme for various problems including the Darcy and the Navier-Stokes equations. Hybrid discontinuous Galerin methods
(HDG) was first designed and proposed by Y. Jeon and myself to study the Darcy equation. We further develop the method and provide arbitrary-order, locally conservative, stabilized formulation for
Navier-Stokes problems. Several numerical results are presented to test the performance of the algorithm and to validate the theory developed. For stationary incompressible Navier-Stokes equations,
numerical results for the lid-driven cavity problem are presented with Reynolds numbers up to 21000, and compared with existing results. This is a joint work with Y. Jeon and D. Shin. | {"url":"http://my.math.snu.ac.kr/board/index.php?mid=colloquia&page=6&sort_index=speaker&order_type=asc&document_srl=768331&l=ko","timestamp":"2024-11-02T01:50:30Z","content_type":"text/html","content_length":"44283","record_id":"<urn:uuid:61b117b7-1242-4633-98f2-d304cc0c2bf9>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00700.warc.gz"} |
The Stacks project
Remark 60.13.1. In Situation 60.7.5. Let $(U, T, \delta )$ be an object of $\text{Cris}(X/S)$. Write $\Omega _{T/S, \delta } = (\Omega _{X/S})_ T$, see Lemma 60.12.3. We explicitly describe a first
order thickening $T'$ of $T$. Namely, set
\[ \mathcal{O}_{T'} = \mathcal{O}_ T \oplus \Omega _{T/S, \delta } \]
with algebra structure such that $\Omega _{T/S, \delta }$ is an ideal of square zero. Let $\mathcal{J} \subset \mathcal{O}_ T$ be the ideal sheaf of the closed immersion $U \to T$. Set $\mathcal{J}'
= \mathcal{J} \oplus \Omega _{T/S, \delta }$. Define a divided power structure on $\mathcal{J}'$ by setting
\[ \delta _ n'(f, \omega ) = (\delta _ n(f), \delta _{n - 1}(f)\omega ), \]
see Lemma 60.3.1. There are two ring maps
\[ p_0, p_1 : \mathcal{O}_ T \to \mathcal{O}_{T'} \]
The first is given by $f \mapsto (f, 0)$ and the second by $f \mapsto (f, \text{d}_{T/S, \delta }f)$. Note that both are compatible with the divided power structures on $\mathcal{J}$ and $\mathcal{J}
'$ and so is the quotient map $\mathcal{O}_{T'} \to \mathcal{O}_ T$. Thus we get an object $(U, T', \delta ')$ of $\text{Cris}(X/S)$ and a commutative diagram
\[ \xymatrix{ & T \ar[ld]_{\text{id}} \ar[d]^ i \ar[rd]^{\text{id}} \\ T & T' \ar[l]_{p_0} \ar[r]^{p_1} & T } \]
of $\text{Cris}(X/S)$ such that $i$ is a first order thickening whose ideal sheaf is identified with $\Omega _{T/S, \delta }$ and such that $p_1 - p_0 : \mathcal{O}_ T \to \mathcal{O}_{T'}$ is
identified with the universal derivation $\text{d}_{T/S, \delta }$ composed with the inclusion $\Omega _{T/S, \delta } \to \mathcal{O}_{T'}$.
Comments (0)
There are also:
• 2 comment(s) on Section 60.13: Two universal thickenings
Post a comment
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder,
this is tag 07J2. Beware of the difference between the letter 'O' and the digit '0'.
The tag you filled in for the captcha is wrong. You need to write 07J2, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/07J2","timestamp":"2024-11-08T07:59:27Z","content_type":"text/html","content_length":"15640","record_id":"<urn:uuid:694761f3-5425-41c1-a904-f3dc8c27d922>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00322.warc.gz"} |
Blog: Logs for the MRL Meeting Held on 2019-11-12
Logs for the MRL Meeting Held on 2019-11-12
November 12, 2019
<sarang> Hello all
<sarang> Meeting begins presently
<sgp_> hello
<sarang> First, GREETINGS
<sgp_> darn, beat you to it :)
<sarang> Next up, ROUNDTABLE
<suraeNoether> howdy :D
<suraeNoether> sarang how about you go first
<sarang> I backported the RCT3 exploit fix from the multi-input aggregated proving system to the single-input prover, updated code to reflect this, and checked the relevant security proofs for this
<sarang> Also went through some math on ways to support multisignatures securely on the sublinear protocols under consideration, with no good answers
<sarang> As of now, the constructions for RCT3, Omniring, and Triptych all require an output secret key inversion, which is incompatible with the linear-combination method used for doing
<sarang> This only comes up for RCT3 and Triptych in the key image generation step, but it's still unfortunate
<suraeNoether> i was reading about some signature schemes that don't use hash functions and do use key inversion the other day… when i review your stuff on triptych soundness later today, i'll see if
there is anythingt obvious that jumps out at me
<sarang> Thanks
<suraeNoether> i would assume it's not obvious because you and randomrun are both v diligent
<sarang> A few things are still in progress too
<sarang> Regardless of multisignature support, I'm working up a preprint on Triptych for IACR eprint
<sarang> It'll either be the provably-secure single-signer version, or the multi-signer version if the soundness argument works out
<sarang> It'll include the same PRF-based key image construction as found in RCT3 and Omniring, since that's much more efficient than one based on hashing public keys
<sarang> Now passing the baton to suraeNoether
<suraeNoether> cool, so for our traceability analysis, i'm collecting data now. i presented some preliminary results from a *single* simulation yesterday that were rather promising, but can't be
generalized yet
<sarang> Is the code for those results currently pushed to your repo?
<suraeNoether> indeed, all you do is run playerchallenger.py with python3 and simulation results will be spit out
<sarang> Can you provide the branch and commit for that version, to be sure we're running the same code?
<suraeNoether> of course, the results will vary from simulation to simulation so the precise numbers i provided yesterday will change from simulation to simulation
<sarang> right
<suraeNoether> https://github.com/b-g-goodell/mrl-skunkworks/tree/matching-buttercup/Matching is the present up to date everything
<sarang> Got it, thanks
<suraeNoether> the code isn't complete in a lot of ways (there are always ways to make the weighting scheme selected by Eve to be better and to take into account more data), but it's complete enough
to start doing some data analysis to get some hard numbers on churn
<sgp_> How can I configure it to test with different churn parameters?
<suraeNoether> i am currently modifying the code to specifically investigate churn, which requires some changes to the very front end of my simulations; i don't expect it to be done today
<suraeNoether> ^ heh
<sgp_> ok, so stay tuned
<sarang> I'm seeing commit d5076 as most recent
<suraeNoether> indeedindeed, d5076 is most common
<sarang> roger, thanks
<suraeNoether> i think MRL-churn-numbers will have some more satisfying answers later this week
<suraeNoether> other than that: catching up on the RCT, omni, and triptych work that sarang has been doing
<suraeNoether> that's all i have. OH my work report and stuff like that :P
<scoobybejesus> fingers crossed my question makes sense
< could the logistics be made to work such that signing a tx still happens via linear-combination but key image is derivable independently by multisig members?
<sarang> We'd need a secure MPC for the function J(x) = (1/x)*U
<sarang> If there is such a thing, it's all ok
<sarang> (here U is a globally fixed curve group generator)
<sarang> I'm trying to find the paper where that particular PRF was first introduced
<sarang> Cited from Omniring (reference 20): https://www.iacr.org/cryptodb/archive/2005/PKC/3320/3320.pdf
<sarang> Anything else to share or discuss from anybody?
<sarang> Besides me both liking and disliking that particular pseudorandom function =p
<suraeNoether> Just compute the logarithm then add ;)
<sarang> But for real, an MPC for that function based on linear combinations would solve the multisig problem AFAICT
<sarang> It may not be possible while retaining its nice PRF properties
<sarang> (perhaps there's a formal argument that such an MPC couldn't exist)
<suraeNoether> Slightly more seriously: why not compute the inverse of the product of the private keys and instead of a partial sum on the basepoint being passed around, a partial product on the
basepoint is passed around…
<suraeNoether> If I have x and you have y, let the combined key be 1/(xy) U
<sarang> If you take away the affine nature of the composite key, I don't see a way to make that work cleanly with the rest of the proof
<sarang> nor do I immediately see what partial key image data would be passed around to construct the full image
<suraeNoether> I send you (1/x)U. You multiply by 1/y…
<suraeNoether> So it's not just an mpc you need but specifically a linear function of the input keys…
<sarang> Perhaps. I was thinking in the context of linear key combinations initially (since the rest of the proofs play nicely with that)
<sarang> The current multisig works nicely because everything plays nicely with linear combinations
<sarang> But point taken that if it were possible to relax that restriction, it could be quite compelling
<sarang> The only point in Triptych and RCT3 that requires the use of a full private key (outside of key image construction) is at the end of the proofs, to construct a particular masked scalar
<suraeNoether> Hmm
<suraeNoether> Ok well I am going to continue reading on triptych and I will try to make a push later today to mess with churn number in my simulations for sgp_
<sarang> Righto
<sarang> Let me know if you have any additional thoughts on multisig/MPC constructions too
<sarang> Oh, and here's a neat paper that came out recently for doing composable zk proofs in a Python library: https://arxiv.org/abs/1911.02459
<sgp_> thanks suraeNoether. I'm interested in testing your model out
<sarang> So perhaps on to ACTION ITEMS now?
<sarang> Mine are to wrap up the Triptych formalization to a preprint as much as possible, while considering options for secure MPC
<sarang> and then I want to do a deeper review of suraeNoether's recent work on his simulation code
<sarang> (and clear a backlog of lit review)
<suraeNoether> My big backlog for matching at this point is commenting it and documenting how it works for anyone who wants to pick it up
<suraeNoether> But other than that, I am reading today and doing my work reoorts
Post tags : Dev Diaries, Cryptography, Monero Research Lab | {"url":"https://www.getmonero.org/2019/11/12/mrl-meeting.html","timestamp":"2024-11-08T20:47:21Z","content_type":"text/html","content_length":"39101","record_id":"<urn:uuid:49935b35-39c1-4ee4-a8e9-39f15ad523b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00308.warc.gz"} |
Fast constant-time gcd computation and modular inversion
Niels Möller nisse at lysator.liu.se
Sun May 29 21:28:41 CEST 2022
Albin Ahlbäck <albin.ahlback at gmail.com> writes:
> Have you looked at https://eprint.iacr.org/2020/972.pdf, where the
> author seems to suggests an even faster algorithm? Or at least it was
> faster under heavy optimization under the assumption of what inputs
> the algorithm recieved.
No, I wasn't ware of that. I've now had a quick look (algorithm 2, page
6, seems to be the main part).
It's pretty close to standard extended binary, and like gmp's
mpn_sec_invert, a single innerloop iteration is one conditional
subtraction of the smaller number from the larger and right shift of a
single bit.
The trick (which is new to me) is to reduce k-1 bits by taking the least
significant k-1 bits *and* most significant approx k+1 bits of both
numbers, and then the innerloop operates only on these smaller nubmers,
ignoring the middle bits. The iteration steps are collected into a
transformation matrix.
And then have an outerloop applying that transformation matrix to the
complete numbers and the cofactors.
I haven't read the correctness argument, but there seems to be some
subtle issues: At line 3,
n = max(len(a), len(b), 2k)
makes n data dependant, which in turn makes side-channel silent
extraction of top bits, floor (a / 2^{n-k-1}) a bit tricky, since the
way these bits straddles a boundaries becomes data dependent.
And the need for the conditional negations (lines 18 -- 21) seems
related to rounding errors from ignored middle bits.
Niels Möller. PGP key CB4962D070D77D7FCB8BA36271D8F1FF368C6677.
Internet email is subject to wholesale government surveillance.
More information about the gmp-devel mailing list | {"url":"https://gmplib.org/list-archives/gmp-devel/2022-May/006122.html","timestamp":"2024-11-15T03:16:35Z","content_type":"text/html","content_length":"4618","record_id":"<urn:uuid:595e022c-2ddd-4ddb-a554-98451d46b42b>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00580.warc.gz"} |
Linear Casual Modeling With Structural Equations
search for books and compare prices
Linear Casual Modeling With Structural Equations
5–14 days
5–14 days
Amazon.com (Used)
5–14 days
5–14 days
The price is the lowest for any condition, which may be new or used; other conditions may also be available.
Jump down to see edition details for:
Bibliographic Detail
Publisher Chapman & Hall
Publication date June 5, 2009
Binding Hardcover
Edition 1
Book category Adult Non-Fiction
ISBN-13 9781439800386
ISBN-10 1439800383
Dimensions 1.25 by 6.50 by 9.50 in.
Weight 1.75 lbs.
Original list price $94.95
Other format details sci/tech
Summaries and Reviews
Amazon.com description: Product Description
Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation
with structural equation modeling (SEM) that concerns the special case of linear causal relations. In addition to describing how the functional relation concept may be generalized to treat
probabilistic causation, the book reviews historical treatments of causation and explores recent developments in experimental psychology on studies of the perception of causation. It looks at how to
perceive causal relations directly by perceiving quantities in magnitudes and motions of causes that are conserved in the effects of causal exchanges.
The author surveys the basic concepts of graph theory useful in the formulation of structural models. Focusing on SEM, he shows how to write a set of structural equations corresponding to the path
diagram, describes two ways of computing variances and covariances of variables in a structural equation model, and introduces matrix equations for the general structural equation model. The text
then discusses the problem of identifying a model, parameter estimation, issues involved in designing structural equation models, the application of confirmatory factor analysis, equivalent models,
the use of instrumental variables to resolve issues of causal direction and mediated causation, longitudinal modeling, and nonrecursive models with loops. It also evaluates models on several
dimensions and examines the polychoric and polyserial correlation coefficients and their derivation.
Covering the fundamentals of algebra and the history of causality, this book provides a solid understanding of causation, linear causal modeling, and SEM. It takes readers through the process of
identifying, estimating, analyzing, and evaluating a range of models.
The price comparison is for this edition
1 edition from Chapman & Hall (June 5, 2009)
details & prices
| 444 pages | 6.50 × 9.50 × 1.25 in. | 1.75 lbs | List price $94.95
Emphasizing causation as a functional relationship between variables that describe objects, Linear Causal Modeling with Structural Equations integrates a general philosophical theory of causation
with structural equation modeling (SEM) that concerns the special case of linear causal relations.
Pricing is shown for items sent to or within the U.S., excluding shipping and tax. Please consult the store to determine exact fees. No warranties are made express or implied about the accuracy,
timeliness, merit, or value of the information provided. Information subject to change without notice. isbn.nu is not a bookseller, just an information source. | {"url":"https://isbn.nu/9781439800386","timestamp":"2024-11-07T03:26:30Z","content_type":"text/html","content_length":"17780","record_id":"<urn:uuid:63a9f078-5c27-4a41-9fea-bc5bc3ae351b>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00196.warc.gz"} |
Empirical derivation of gas equation
Many important relationships describe the characteristics of the gas samples that have been entirely empirically derived i.e, rather than attempting to define the hypothetical reason in which these
relationships might exist, it means they are only based on observations. So, these are the empirical gas laws.
There are numerous methods to derive the Empirical Ideal Gas Law, however, the easiest method is to utilize the three simplistic gas laws.
AVOGADRO’S LAW asserts that the gas volume is directly proportional to the no. of moles.
V ∝ n
BOYLE’S LAW asserts that the gas volume is inversely proportional to its pressure.
V ∝ 1/P
CHARLES’S LAW says that the gas volume
is directly proportional to its absolute temperature.
V ∝ T
In detail:
Boyle’s Law
One of the most significant relationships that rule the gas samples and which can be mathematically modeled is volume and pressure relationship. The law showcases that for a fixed gas sample at
constant T, volume and pressure are inversely proportional to each other.
Charles’ Law
As per Charles’ Law, it states that a fixed gas sample volume is proportional to a temperature at constant pressure. Charles’ law works in the condition since there’s specifically the absolute
minimum in the scale of volume, so there should be the absolute minimum in the scale of temperature!
VT = constant
Or, V1T2 =V1T2
Even the second law of thermodynamics estimates the absolute minimum temperature.
Gay-Lussac’s Law
With the help of Charles’ Law, it suggests that the absolute minimum exists at the scale of the temperature because pressure can never be (-ve) negative.
pT = constant
Or, p1T2 = p1T2
Combined Gas Law
When the gas laws, Charles’law, Boyle’s law, and Gay-Lussac’s Law are combined into one single empirical formula, which is useful. For the given quantity of gas, the relationship should be:
p1V1T1 = p2V2T2
Avogadro’s Law
In the statement of Avogadro’s Law, it defines that at the same pressure and temperature, any gas sample has the same molecules’ number per unit volume.
Or, n1V1 = n2V2
From empirical relationships found amongst temperature, volume, pressure, and the number of moles present in a gas, the ideal gas law is being derived. Also, it can be used in calculating any of the
4 properties if 3 out of them are known to you.
Ideal gas equation:
R=0.08206 L⋅atm/K⋅mol = 8.3145 J/K⋅mol
General gas equation: PiViniTi = PfVfnfTf
The density of a gas: ρ = MPRT
Empirical relations between temperature, volume, pressure, and the gas quantity can be joined into ideal gas law, which is PV = nRT.
R, the proportionality constant, is known as the gas constant and consist of 0.08206 (L•atm)/(K•mol), or 1.9872 cal/(K•mol), or 8.3145 J/(K•mol) value, which depends on the how units are used.
The behavior of an ideal gas is being described by the ideal gas law which is a theoretical or assumed substance whose behavior is known to be quantitatively described by gas’s Kinetic Molecular
Theory and the ideal gas law. STP (Standard Temperature and Pressure) is 0-degree Celsius and 1-atm.
Ideal gas containing 1 mol of volume at Standard Temperature and Pressure of 22.4 L is the “standard molar volume.” All empirical relationships of gases are ideal gas law’s special cases, in which 2
out of 4 parameters are kept constant.
Ideal gas law allows the calculation of the fourth quantity value (P, V, T, or n), when a gaseous sample is required to be described, when you can predict the values of P, V, T, or n quantities and
others are known, following conditional changes if P, V, T, and n values are known. Also, ideal gas law can be used for the calculation of the gas density only if its molar mass is known to you, or
vice-versa can calculate the molar mass of a not known sample of gas if its density is calculated. | {"url":"https://www.w3schools.blog/empirical-derivation-of-gas-equation","timestamp":"2024-11-04T20:31:35Z","content_type":"text/html","content_length":"170851","record_id":"<urn:uuid:c5591265-092d-4b9a-a00f-7c07717c2402>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00594.warc.gz"} |
VARANDOM procedure • Genstat Knowledge Base 2024
Finds the best REML random model from a set of models defined by VFMODEL (R.W. Payne).
PRINT = string Controls what summary output is produced about the models (deviance, aic, bic, sic, dffixed, dfrandom, change, exit, best); default devi, aic, sic, dfra, best
PBEST = string Controls the output from the REML analysis with the best model (model, components, effects, means, stratumvariances, monitoring, vcovariance, deviance, Waldtests, missingvalues,
tokens covariancemodels, aic, sic, bic); default * i.e. none
PTRY = string Controls the output to present to present from the REML analysis used to try each model (model, components, effects, means, stratumvariances, monitoring, vcovariance, deviance,
tokens Waldtests, missingvalues, covariancemodels, aic, sic, bic); default * i.e. none
MODELSTRUCTURES = Model-definition structures specifying the models to try
PTERMS = formula Terms (fixed or random) for which effects or means are to be printed; default * implies all the fixed terms
PSE = string token Standard errors to be printed with tables of effects and means (differences, estimates, alldifferences, allestimates, none); default diff
MVINCLUDE = string Whether to include units with missing values in the explanatory factors and variates and/or the y-variates (explanatory, yvariate); default * i.e. omit units with missing values in
tokens either explanatory factors or variates or y-variates
METHOD = string How to choose the best model (aic, sic, bic); default sic
PROHIBIT = string Whether to exclude models where any estimated variance parameters are held at a bound (bound); default *
Y = variates Response variates
NBESTMODEL = scalars Saves the number of the best model for each y-variate, returning a missing value if no models could be fitted successfully
SAVE = REML save structures Save structure from the analysis of the best model for each y-variate
VARANDOM allows you to try several alternative random models for a REML analysis, and then select the best one according to either their Akaike or Schwarz (Bayesian) information coefficients.
Model-definition structures for the models to be assessed must be specified using the MODELSTRUCTURES option. These are formed using the VFMODEL and VFSTRUCTURE procedures, which define the aspects
controlled by the VCOMPONENTS and VSTRUCTURE directives, respectively.
The response variate for the analysis must be specified by the Y parameter. The number of the best model can be saved, in a scalar, by the NBESTMODEL parameter; it returns a missing value if no
models could be fitted successfully The REML save structure from the analysis with the best model can be saved using the SAVE parameter.
The MVINCLUDE option controls whether units with missing values in the explanatory factors and variates and/or the y-variate are included in the analysis, as in the REML directive.
The METHOD option specifies how to assess the models
aic uses their Akaike information coefficients,
sic or bic uses their Schwarz (Bayesian) information coefficients (default).
You can set option PROHIBIT = bound, to excludes models with any estimated variance parameters held at a bound.
The PRINT option specifies the summary output to be produced about the models. The settings are mainly the same as those of the VRACCUMULATE procedure (which is used to store and then print details
of the analyses). There is also an extra setting best, which prints the description of the best model. The default is to print the best description, together with the deviance, the Akaike and Schwarz
(Bayesian) information coefficients and the number of degrees, for all the random models.
The PBEST option specifies the output to be produced from the REML analysis with the best model. Similarly, the PTRY option indicates what output should be produced for each candidate random model
when it is tried. Their settings are mainly the same as those of the PRINT option of the REML directive. There are also extra settings aic and sic (with a synonym bic) to print the Akaike and Schwarz
(Bayesian) information coefficients, respectively. The default for both these options is to produce no output.
The PTERMS option operates as in REML, to specify the terms whose means and effects are printed by PBEST and PTRY; the default is all the fixed terms. Likewise, the PSE option controls the type of
standard error that is displayed with the means and effects; the default is to give a summary of the standard errors of differences.
Options: PRINT, PBEST, PTRY, MODELSTRUCTURES, PTERMS, PSE, MVINCLUDE, METHOD, PROHIBIT.
Parameters: Y, NBESTMODEL, SAVE.
See also
Directives: REML, VCOMPONENTS, VSTRUCTURE.
Procedures: VAOPTIONS, VARECOVER, VFMODEL, VFSTRUCTURE, VMODEL, VRACCUMULATE.
Commands for: REML analysis of linear mixed models.
CAPTION 'VARANDOM example',\
'Slate Hall Farm data (Guide to REML in Genstat, Section 1.8).';\
SPLOAD '%gendir%/data/slatehall.gsh'
" define model for analysis as a randomized-blocks design "
VFMODEL [MODELSTRUCTURE=RCBD; DESCRIPTION='Randomized blocks';\
FIXED=variety] replicates
" define model for analysis as a Lattice square design "
VFMODEL [MODELSTRUCTURE=Latticesq; DESCRIPTION='Lattice square';\
FIXED=variety] replicates/(rows*columns)
" define model for analysis with an AR1 (x) AR1 model "
VFMODEL [MODELSTRUCTURE=AR1xAR1; DESCRIPTION='AR1 (x) AR1';\
FIXED=variety] fieldrow.fieldcolumn
VFSTRUCTURE [MODELSTRUCTURE=AR1xAR1; TERMS=fieldrow.fieldcolumn]\
2('AR'); ORDER=1; FACTOR=fieldrow,fieldcolumn
" define model for analysis with an AR1 (x) AR1 model + measurement error "
VFMODEL [MODELSTRUCTURE=AR1xAR1p; DESCRIPTION='AR1 (x) AR1 + plots';\
FIXED=variety] fieldrow.fieldcolumn+plotnumber
VFSTRUCTURE [MODELSTRUCTURE=AR1xAR1p; TERMS=fieldrow.fieldcolumn]\
2('AR'); ORDER=1; FACTOR=fieldrow,fieldcolumn
VARANDOM [MODELSTRUCTURES=RCBD,Latticesq,AR1xAR1,AR1xAR1p]\
yield; SAVE=savebest
VDISPLAY [PRINT=model,components,wald] savebest | {"url":"https://genstat.kb.vsni.co.uk/knowledge-base/varandom/","timestamp":"2024-11-06T07:53:00Z","content_type":"text/html","content_length":"46417","record_id":"<urn:uuid:93344000-727a-4186-ac64-656817b07f16>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00678.warc.gz"} |
Standard 4 : Part Two – Lesson 9 – Word Problems : Addition and Subtraction
Maharashtra Board Textbook Solutions for Standard Four
Lesson 9 – Word Problems : Addition and Subtraction
(1) Baburao planted 143 sweet lime trees and 156 chikoo trees in his orchard. How many trees did he plant in all ?
(2) Priyanka bought books worth ₹245 and notebooks worth ₹178. How much did she spend altogether?
(3) There are 1,230 story-books and 150 poetry books in a library. How many books are there in the library altogether?
(4) If 1,310 children, 1,505 women and 790 men came to watch the circus, how many people came to watch the circus altogether?
(5) Ajay deposited ₹18,000 in one bank and ₹15,000 in another. What is the total amount of money to be deposited in the banks?
(1) Abdul had 720 beads. He sold 648 of them. How many beads does he have left?
(2) Joseph bought tables worth ₹6,350 and chairs worth ₹3,800. How much more did the tables cost than the chairs?
(3) Ragahvrao bought seeds worth ₹3,587 and fertilizers worth ₹4,655. How much more than the seeds did the fertilizers cost?
(4) The reading of the electricity meter in Nisha’s house was 03452 on the 1st of June. On the 1st of July, it was 03531. How many units of electricity were used in June?
(5) In a census taken in 2001, the population of a village was 62,947. The census of the year 2011 listed the population as 74,405. How much did the population increase in the meantime?
(1) Rohan spent ₹27,658 for purchasing computers and ₹16,478 on a printer and scanner. Packing and transporting the goods cost extra. Rohan spent a total of ₹47,000. How much did he spend on packing
and transport?
(2) In a nursery, there were 32,140 saplings. 12,789 were mango saplings and 10,423 were teak saplings. How many other saplings were there?
(3) The seating capacity of a playground is 20,750. If there were 8,500 women and 11,200 men present for a function, how many seats remained vacant?
(4) Rambhau had ₹15,000. He bought fodder worth ₹8,570 and other animal feed worth ₹4,950. How much money did he have left?
(5) Lalitaben donated ₹75,000 to a hospital. ₹47,500 were used for equipment and ₹18,240 were used for medicines. How much money still remains?
Based on the given information, make one addition and one subtraction problem each and solve the problems.
(1) Cost of one company’s washing machine: ₹19,999; cost of another company’s washing machine: ₹21,550.
(2) ₹3,900 worth of fodder and ₹2,570 worth of other animal feed.
(3) The population of one town, 76,560; the population of another town, 57,940.
(4) The flight ticket from Mumbai to Tokyo, ₹35,840; the flight ticket from Tokyo to Los Angeles, ₹38,760.
(5) Cost of a new motorcycle, ₹46,530; cost of an old motorcycle, ₹8,500.
(6) 17,500 maths books; 13,250 science books.
(7) The bus from Kolhapur to Mumbai passes through Pune. The distance from Pune to Mumbai is 192 kilometres. The distance from Pune to Kolhapur is 235 kilometres.
(8) The capacity of one water tank is 38,500 litres; the capacity of another water tank is 22,750 litres. | {"url":"https://theshaykhacademy.com/textbook-solutions-for-maharashtra-board/standard-four/part-two-lesson-9-word-problems-addition-and-subtraction/","timestamp":"2024-11-12T21:36:58Z","content_type":"text/html","content_length":"321591","record_id":"<urn:uuid:09ae46d5-3e4c-481b-bd45-101e84cbc096>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00550.warc.gz"} |
Design of an efficient routing algorithm on the WK-recursive network - 학지사ㆍ교보문고 스콜라
KCI등재후보 학술저널
Design of an efficient routing algorithm on the WK-recursive network
Design of an efficient routing algorithm on the WK-recursive network
• : KCI등재후보
• 2022.10
• 39 - 46 (8 pages)
DOI : 10.30693/SMJ.2022.11.9.39
The WK-recursive network proposed by Vecchia and Sanges[1] is widely used in the design and implementation of local area networks and parallel processing architectures. It provides a high degree of
regularity and scalability, which conform well to a design and realization of distributed systems involving a large number of computing elements. In this paper, the routing of a message is
investigated on the WK-recursive network, which is key to the performance of this network. We present an efficient shortest path algorithm on the WK-recursive network, which is simpler than Chen and
Duh[2] in terms of design complexity.
Ⅰ. Introduction
Ⅱ. some definitions of the WK-recursive network
Ⅲ. An efficient shortest path routing algorithm on WK-recursive networks
Ⅳ. Conclusion | {"url":"https://scholar.kyobobook.co.kr/article/detail/4010036922212","timestamp":"2024-11-11T20:59:43Z","content_type":"text/html","content_length":"43245","record_id":"<urn:uuid:af562aa4-72a1-409b-a430-72f05cbe8e6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00766.warc.gz"} |
Arbitraging The Implied Correlation Index
Making money with a dispersion arbitrage strategy on a kept-secret index.
To understand how this strategy works, let’s first go over what the Implied Correlation index tells us.
Put simply, the Implied Correlation index measures the market’s expectations for the correlation of the volatility of the top 50 stocks in the S&P 500. This was designed to act as a proxy that
represented what’s known as diversification benefit. According to Modern Portfolio Theory, your overall risk is lower when you own a portfolio of stocks with a low correlation.
After the 2008 crisis, it became clear that just using historical volatility as a way to measure risk without looking at correlation was an incredibly dangerous game. Suddenly when widespread
volatility picked up, products that had a low correlation (price-wise) suddenly had a high volatility correlation. So it goes from having a relatively stable basket of uncorrelated securities, to
having an extremely volatile basket of uncorrelated securities.
Because the index is mainly exposed to volatility, it is somewhat correlated to the VIX and possesses many of the same mean-reverting tendencies you would expect. Here is a graph of how the two line
This post is for paid subscribers | {"url":"https://www.quant-galore.com/p/arbitraging-the-implied-correlation","timestamp":"2024-11-03T00:40:51Z","content_type":"text/html","content_length":"125664","record_id":"<urn:uuid:5dec2ad2-f395-4bd5-b1b2-261956689eda>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00608.warc.gz"} |
Newton's Method for Approximating Roots
Newton's Method for Approximating Roots
We will now look at another method for approximating roots of functions. Suppose that $y = f(x)$, and let $\alpha$ be a root of $f$, and suppose that $x_0$ is a first approximation of the root $\
alpha$ of $f$. Now consider the tangent line to the graph of $f$ at the point $(x_0, f(x_0))$. Provided that this tangent line does not have slope $0$, then this tangent line will have a root of its
own that will approximately be equal to $\alpha$.
The equation of this tangent line can be given by the following equation:
\quad p_1(x) = f(x_0) + f'(x_0)(x - x_0)
If we set $p_1(x) = 0$ and solve for $x_1$ as the $x$-intercept of the tangent line $p_1(x)$, then we obtain:
\quad p_1(x) = f(x_0) + f'(x_0)(x - x_0) \\ \quad 0 = f(x_0) + f'(x_0)(x_1 - x_0) \\ \quad x_1 = x_0 -\frac{f(x_0)}{f'(x_0)}
We now take $x_1$ (the root of the first tangent line) to be an approximation of $\alpha$. If we now look at the tangent line at $(x_1, f(x_1))$ on $f$, then we obtain a new tangent line, and
provided that the slope of this tangent line is not $0$, then this tangent line has a root of its own that is an even better approximation of $\alpha$.
The equation of this tangent line can be given by the following equation:
\quad p_2(x) = f(x_1) + f'(x_1)(x - x_1)
Once again, if we set $p_1(x) = 0$ and solve for $x_2$ as the x-intercept of the tangent line $p_2(x)$, then we obtain:
\quad x_2 = x_1 -\frac{f(x_1)}{f'(x_1)}
In fact, the more we repeat this procedure, the closer and closer our approximation gets to $\alpha$. For $n + 1$ iterations of this procedure and provided that $f'(x_i) \neq 0$ for $i = 1, 2, ...,
n$, we obtain the following general formula for the $x$-intercepts of the corresponding tangent lines as approximations to $\alpha$:
\quad x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}
Theorem 1 (Newton's Method): Suppose that $f$ is a differentiable function that contains the root $\alpha$, and $x_0$ is an approximation of $\alpha$.
Step 1: A better approximation of $x_1$ can be obtained as $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$ provided that $f'(x_0) \neq 0$.
Step n + 1: A better approximation of $x_{n}$ can be obtained as $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$ provided that $f'(x_n) \neq 0$.
One advantage of Newton's Method is that the sequence of approximations $\{ x_n \}$ tend to converge much more quickly towards the root $\alpha$. The major disadvantage of Newton's Method is that the
sequence of approximations $\{ x_n \}$ may not converge to $\alpha$ if we do not choose an initial approximation $x_0$ that is sufficiently close to $\alpha$. We discuss this potential problem on the
Error Analysis of Newton's Method for Approximating Roots page. | {"url":"http://mathonline.wikidot.com/newton-s-method-for-approximating-roots","timestamp":"2024-11-13T18:47:44Z","content_type":"application/xhtml+xml","content_length":"18430","record_id":"<urn:uuid:3bc71545-8bd9-4a36-a443-ccd49e6e0415>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00756.warc.gz"} |
t-Test, Chi-Square, ANOVA, Regression, Correlation...
Statistics Software
DATAtab is an easy-to-use statistics software available at datatab.net. By eliminating the need for installation and maintenance, it is especially user-friendly, as all calculations are performed
directly in the user's web browser. In addition, DATAtab is of course DSGVO compliant.
Statistics software made simple
Especially in times when almost everyone works in a home office, it is a great advantage to be able to analyze data easily from home. As the collection and analysis of data becomes more and more
important worldwide, students and companies need to find ways to analyze their data. However, many questions arise in this context: what is correlation analysis? What is a box plot? What is a
regression model? And above all, the question: how can I calculate it as easily and quickly as possible? DATAtab was developed to solve this problem! DATAtab was designed from the ground up to be as
easy to use as possible. Thus, an online statistics software has been created that makes statistics easy.
Alternative statistics software to SPSS, SAS and Minitab
DATAtab offers a fully functional statistical software that is immediately familiar to everyone. It is very easy to enter, copy and paste data, calculate new values, define the scale of measurements
and select statistical methods. DATAtab is built on the latest web technology, so all algorithms are executed directly in the web browser and no connection to the server is required for calculations.
This makes DATAtab a super alternative statistics software to the classic, often very expensive competitors.
Statistics software at a fair price
1 Month 3 Months 1 Year 3 Years
License ends automatically (no cancellation necessary) and is valid for unlimited use by one user.
In addition to the extensive statistical methods, DATAtab provides automatic statistical and content interpretation of the results.
Extensive functions & useful tutorials
Starting from basic descriptive statistics up to advanced multivariate analysis, DATAtab offers a wide range of available statistical methods: t-tests, ANOVAs, correlation and regression,
non-parametric tests, factor analysis and cluster analysis.
DATAtab users are also supported by useful tutorials explaining theory and statistical background and giving examples from different disciplines. You will find easy-to-understand information ranging
from the level of measurements and frequency tables to hypothesis tests and regression models.
The trial version of the statistics software with the sample data set and the extensive tutorials is free for everyone. This makes it possible to easily discover the variety of statistical methods.
Statistics software developed by experienced data scientists
DATAtab statistics program was developed in 2019 by an enthusiastic and experienced team of data scientists in Austria. Mathias Jesussek, PhD in technical sciences, is an expert in the fields of
numerical mathematics and algorithms for data analysis and web development. Hannah Volk-Jesussek, PhD in social sciences, is a lecturer and researcher in the field of quantitative empirical research
methods and statistics. Several universities and colleges already use DATAtab in teaching and for master theses and give very good feedback.
Why DATAtab as a statistics program?
DATAtab is an innovative and user-friendly web app for statistical data analysis. The web app makes it possible to perform statistical analyses and data visualizations directly online in the web
browser. Users have no installation effort, get their results quickly and easily, and their data remains secure on their own PC.
DATAtab aims to make it easier for researchers, students and companies to enter the world of statistics and to make statistical data analysis as simple as possible. The web app offers a broad
spectrum of methods, ranging from descriptive statistics to linear regression analyses, logistic regression analyses, factor analyses and cluster analyses.
Overview free statistics software
Here is a list of the most popular free statistics software:
R and Python are the representatives that require programming knowledge and PSPP and Jamovi can be operated with a graphical user interface.
Statistics Software R
R is a programming language used to analyze and display data. Thus, R is a free statistics software for statistical calculations and graphics. It compiles and runs on a variety of UNIX platforms,
Windows and macOS. All components of the free statistics program are open source.
PSPP the free alternative to SPSS
GNU PSPP is a statistics program for statistical analysis of data. It is a free alternative to SPSS and is very similar to it. However, the range of functions of the free alternative to SPSS is
partly not as large as in SPSS.
Statistics program Jamovi
jamovi is a free statistics program. jamovi was developed from the ground up for ease of use and is an alternative to statistical products such as SPSS and SAS.
Statistics Software JASP
JASP is an open source project supported by the University of Amsterdam. The free statistical software JASP is in use worldwide. To use JASP, it must first be downloaded and installed.
Overview of paid statistics software
Here is a list of the most popular paid statistical software:
Statistics Software Excel
Yes, Excel can also be used as a statistical software. This requires more effort for the calculation of statistical methods, but the training effort is low if you are already familiar with Excel.
Statistics software online
Only a few statistics software solutions offer the advantage of online use, so if you want it to be quick and easy, just take a look at the statistics calculator app DATAtab. | {"url":"https://datatab.net/statistics-software","timestamp":"2024-11-05T01:12:49Z","content_type":"text/html","content_length":"36327","record_id":"<urn:uuid:b88b4076-a216-4929-b970-d5546b842013>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00484.warc.gz"} |
How the soft side of qubits hardens our QEC capabilities - Riverlane
Qubits' strength is to exist in a superposition between a 0 and 1 – but this also leaves qubits in an extremely fragile state where the slightest amount of noise causes them to break down. This is
where quantum error correction (QEC) comes in – and a new paper demonstrates how using all the data coming from a qubit can enhance its QEC protection.
The arXiv paper, Reducing the error rate of a superconducting logical qubit using analog readout information, demonstrates the positive impact of soft-information-aware decoding on a superconducting
quantum computer run by our partners at Delft University of Technology.
Let me start by explaining what this soft information is – and then how it’s used to improve our quantum error correction capabilities.
Soft information decoding
Quantum error correction works by using multiple physical qubits on the device to represent a single logical qubit, thereby introducing redundancy into the system. This redundancy provides protection
against errors. To learn about the errors that have occurred so that we can correct for them, we must make measurements. However, if you directly observe a qubit, this act of observation destroys its
quantum state and renders the qubit useless.
That’s why you need large numbers of ‘syndrome’ qubits that you can observe to infer – and then correct – data errors on the qubits.
The process of inferring the errors based on the syndrome qubit measurements is known as decoding and is performed by a decoder. The decoder takes a model of the possible errors that can occur on the
device and their effect on the measurement outcomes and uses them to work out the most likely error to have occurred. Large-scale quantum computers will generate terabytes of measurement data every
second that must be decoded as fast as it’s acquired to stop errors propagating and rendering calculations useless.
Typically, quantum decoders rely only on ‘hard’ digitised measurement outcomes – 0s and 1s – and ignore the valuable information embedded in the analogue ‘soft’ measurement signal. This information
provides insight into the likelihood of a measurement error having occurred and thus can be used by the decoder to improve its performance.
Figure 1: Measurement response of |0⟩ and |1⟩ states in IQ space for an example qubit.
When we measure a qubit, we do not directly obtain a value of 0 or 1. Instead, in the case of superconducting qubits, we obtain IQ voltages, as shown in the figure above. This figure shows the m
easurement response of |0⟩ and |1⟩ states in IQ space for an example qubit.
These IQ voltages form the ‘soft’ measurement signal. Each dot corresponds to data obtained when measuring a qubit during calibration, with the colour indicating the state the qubit was in before
measurement. We use these calibration results to decide how to classify a measurement during the real error correction experiment. If the values fall to the left of the vertical dashed line, we say
that the measurement is a 0; if it falls to the right, we say that the measurement is a 1.
Clearly, if the values are far over to the left or right, we can be pretty confident that we have classified the measurement correctly. However, if the values are close to the dashed line, we will be
more uncertain in our classification. This is useful information for the decoder. Therefore, a method for using the soft data in the decoding processed was proposed, and it has previously been
observed to improve decoding of quantum error correction experiments run on a real quantum computer.
In our work, we analysed data from QEC experiments run by collaborators at Delft University of Technology using their 17-qubit superconducting quantum computer. The group had previously demonstrated
high-fidelity logical operations with a distance-2 surface code.
We focused on decoding methods where the model of the errors that is passed to the decoder is directly learned from the experimental data, with our collaborators using a neural-network decoder while
we used a graph-based decoder. Our results show an improvement of up to 6.8% in the extracted logical error rate, a measure of the error correction performance, with the use of soft information.
While this improvement is limited, we anticipate that with faster measurement times, larger code distances or improved physical error rates, the benefits of using soft information on logical
performance will become more pronounced.
You can read the full paper here. | {"url":"https://www.riverlane.com/news/how-the-soft-side-of-qubits-hardens-our-qec-capabilities","timestamp":"2024-11-13T18:30:27Z","content_type":"text/html","content_length":"23747","record_id":"<urn:uuid:d1b53179-1997-48da-bcdb-a5fa6d50fe8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00789.warc.gz"} |
Sig Fig Calculator (Significant Figures)
Calculate the significant figures in a number or round a number using the sig fig calculator below. For numbers in scientific notation, use e notation (e.g. 3.1415e5).
Count of Significant Figures:
Count of Significant Figures:
The Significant Figures Are:
Result Rounded to Significant Figures:
What are Significant Figures?
The significant figures of a number, also referred to as its significant digits, are the digits in a number that are meaningful in expressing its precision. In other words, these are the digits that
provide meaning to the overall number.
Significant figures are most commonly used when making measurements and are the important digits that tell us something about the precision of the number or measurement, and are often used to
simplify or round a number without losing that precision.
Significant figures do not quantify the size of a number but rather the level of precision, which is useful in converting from decimal to scientific notation or standard form.
How to Find Significant Figures
Since not all digits in a number are significant, there are a series of rules to follow to find which are the significant figures.
Significant Figures Rules
The following are the rules for finding significant figures:^[1]
• All non-zero numbers ARE significant. There are three digits in the number 3.14 that are significant because each digit is non-zero.
• Zeros between non-zero digits ARE significant. The zero in the number 4.605 is significant because it’s between two non-zero numbers.
• Leading zeros ARE NOT significant. Any zero to the left of non-zero digits ARE NOT significant. The zero in the number 0.47 is not significant.
• Trailing zeros in a number with a decimal point ARE significant, but trailing zeros in a number without a decimal point ARE NOT significant. The zero in the number 470 is not significant, but the
zeros in 470.0 are significant.
When a number has a decimal point but no trailing digits, the decimal point indicates that all the digits in the number are significant. For instance, the number 750 has precision to the one’s place
and thus has three significant digits.
You can use an overline to indicate the last significant figure in a number. However, some people choose to use an underline in place of an overline.
Table showing the significant figures for various numbers
Count of Significant Figures Significant Figures
23 2 2, 3
7.5 2 7, 5
0.00025 2 2, 5
10.7 3 1, 0, 7
-12.208 5 1, 2, 2, 0, 8
63500 3 6, 3, 5
63500. 5 6, 3, 5, 0, 0
63500.0 6 6, 3, 5, 0, 0, 0
How to Round Significant Figures
It is common to round a number to a specified number of significant figures, and the process is similar to rounding a decimal. Follow these steps to round a number with significant figures found
using the sig fig rules above.
Step One: Find Significant Figures
The first step to round a number to a sig fig is to find the significant digits in a number. Follow the rules above to find the figures that are significant, then move to the next step.
Step Two: Use Rounding Rules
Once you’ve found the significant figures, use standard rounding rules to round the number to the specified precision. The difference between sig fig rounding and standard decimal rounding is that
the rounding point is the significant digit indicated by the precision rather than the decimal place.
Putting it all Together
Let’s follow the steps above to round 03570 to two significant figures.
The number 03540 has three significant digits: [3, 5, 7]
Round the number to the position of the second sig fig, which is 5:
03570 -> 3600
Sig Fig Rounding Examples
Table showing a number rounded to various significant
Sig Fig Precision Rounded to Significant Figures
375.09 0 –
375.09 1 400
375.09 2 380
375.09 3 375
375.09 4 375.1
375.09 5 375.09
375.09 6 375.090
375.09 10 375.0900000
Frequently Asked Questions
Why do we use significant figures?
We use significant figures to express the precision of a number or, more specifically, of a measurement. For example, if we measure the length of a box with a ruler and the smallest ticks on the
ruler are the centimeter ticks, then we only have precision down to the number of centimeters.
Therefore, the number of centimeters measured would be significant digits, but not any fraction of a centimeter since we only have precision to whole centimeters.
Are exact numbers significant figures?
Yes, exact numbers have all significant figures, following the rules listed above.
Are significant figures the same as significant digits?
Yes, significant digits is another term often used interchangeably with significant figures.
Are numbers with more significant figures more accurate?
This is a common misconception. Numbers with more significant figures are not necessarily more accurate, but they are certainly more precise. Accuracy and precision are two distinct concepts.
You may also find our numbers to words converter useful for converting large numbers to word form.
Visual Guide to Significant Figures
The infographic below demonstrates the four rules for significant figures.
1. Columbia Center for Teaching and Learning, Significant Figures, https://ccnmtl.columbia.edu/projects/mmt/frontiers/web/chapter_5/6665.html | {"url":"https://www.inchcalculator.com/sig-fig-calculator/","timestamp":"2024-11-07T14:14:22Z","content_type":"text/html","content_length":"83114","record_id":"<urn:uuid:f17d5c36-e0ee-48f2-b807-20330c9df699>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00003.warc.gz"} |
Netway Incorporated
Electronic and Opto-Electronic Sub-Systems, Software Tools and complete Environments for Design and Operation of Hybrid - Quantum and Classical - Computing enabled reference products
Who we are
The scaling of computing and communication has led to a large wealth creation and lifted large populations from poverty to middle class but can no longer be continued due to our running into physical
limits. The use of multi-processing and of many accelerators produces diminishing returns. In particular the use of multi-processing lowers the compute time by computing in parallel but incurs
communication time cost to communicate an increasing number of intermediate results from producers to consumers. Quantum computing, by making available new resources, is seen as able to provide an
exponential advantage in complexity of computation performed and/or amount of computational resources required.
With COVID in recent past, likes of Moderna and Pfizer have become, if they were not already, household names. The amount of good they do and their market valuation is often in direct proportion to
the number of 2, 5 and now even 10 billion dollars a year revenue drugs in their portfolio. As broad cures for diseases such as Cancer are yet to be found and as just as SAARS was followed by COVID,
new pandemics will follow COVID and will need vaccines and therapies, there is room for many block-buster drugs.
While medicines are only used by unwell people, materials are used by unwell and well people and also industries and governments and can potentially produce revenues of 20, 50 or even 100 billion
dollars a year!
Some example materials are those used in making of batteries. The materials give a battery a certain capacity, a certain weight and a certain probability of explosion. Can materials be found that
quadruple the capacity and reduce the weight and probability of explosion to a quarter?
Today with help of computing, including learning-aided-computing (‘AI’), we can already map properties desired in a drug or a material to candidate molecular structures and even find paths in a
reaction network from nodes corresponding to readily found molecules to the desired molecule. However the difficulty lies in the fact that the number of candidate molecules can be very large and
making and testing them in the laboratory can takes 1, 2, 5 or even 10 years!
The solution to the basic problem as famously prescribed by Feynman is simulation of Quantum Chemistry using Quantum Computers and, in this particular case, to replace making-and-testing in the
laboratory with such simulations. However efficiently performing using a Quantum Computer a task hard for classical computers requires an appropriate quantum algorithm.
While quantum algorithms to efficiently simulate the evolution of a system of many parti- cles exist, those for predicting the ground state of the Hamiltonian describing a system of many particles
has to resort to heuristics as the latter problem belongs in the complexity class QMA, the quantum analog of NP. It further turns out that the latter problem can be solved by learning from a
polynomial number of example problem-solution pairs.
It is now well-established that learning for performing and performing learning aided tasks employing classical computing are done cost-performance efficiently if done employing hardware
acceleration. The plentiful availability of hardware acceleration for Deep Neural Networks (DNNs) (1) has encouraged their use as sub-algorithms in larger - Reinforcement Learning (RL) (2),
Generative-Adversarial Networks - algorithms. RL and GAN (and use of DNN by the RL and GANs for policy (or actor) and value (or critic) function approximation) have established themselves as
advantageous means of performing tasks autonomously and of performing synthesis tasks. DNN learning is susceptible to inefficiencies and failures including vanishing/exploding gradients and local
The two primary means of achieving efficiency in DNN inference - sparsity and parallelism - conflict. The simplest Quantum Algorithm - the Deutsch Jozsa - exploits the quantum resources of
Superposition to evaluate all alternatives in parallel (referred to as in superposition) and the resource Interference to (interfere away all-but and) be left with the optimal altaernative. Most
sophisticated recent algorithm, that for solving unstructured NP search problem, takes a problem - decoding a list- recoverable code - known to be classically hard and presents a polynomial quantum
algorithm for the problem.
This implies that given a drug or material design or discovery job, the job needs to be divided into tasks such that some tasks are best done on a quantum computer, some on a classical computer
employing learning and rest on classical computers and a framework is required to manage the division, identification of the mode with highest performance for each task, the mode that is optimal
considering both the required performance and available resources at run-time must be employed.
Netway Inc has significant intellectual properties in all of the areas of - quantum algorithms and/or efficient realization of quantum algorithms with demonstrable exponentail advantage over
classical computing, DNN hardware acceleration and learning aided task-data placement, task scheduling, data routing and congestion management.
Soon to be announced solutions in the Netway Inc portfolio are attanged into three groups and are driven by a corresponing Buisiness Unit. Solution groups are - Custom Materials and Medicinal
Compounds, Accelerator-Rich many-core Application Acceleration Processors and Virtual-Private Hybrid Computing Clouds. | {"url":"http://www.netwayinc.com/","timestamp":"2024-11-07T10:31:07Z","content_type":"application/xhtml+xml","content_length":"15064","record_id":"<urn:uuid:460a001b-f0cf-4e21-a721-935796f58649>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00212.warc.gz"} |
The logit function is the inverse of the sigmoid or logistic function, and transforms a continuous value (usually probability \(p\)) in the interval [0,1] to the real line (where it is usually the
logarithm of the odds). The logit function is \(\log(p / (1-p))\).
The invlogit function (called either the inverse logit or the logistic function) transforms a real number (usually the logarithm of the odds) to a value (usually probability \(p\)) in the interval
[0,1]. The invlogit function is \(\frac{1}{1 + \exp(-x)}\).
If \(p\) is a probability, then \(\frac{p}{1-p}\) is the corresponding odds, while the logit of \(p\) is the logarithm of the odds. The difference between the logits of two probabilities is the
logarithm of the odds ratio. The derivative of probability \(p\) in a logistic function (such as invlogit) is: \(\frac{d}{dx} = p(1-p)\).
In the LaplacesDemon package, it is common to re-parameterize a model so that a parameter that should be in an interval can be updated from the real line by using the logit and invlogit functions,
though the interval function provides an alternative. For example, consider a parameter \(\theta\) that must be in the interval [0,1]. The algorithms in IterativeQuadrature, LaplaceApproximation,
LaplacesDemon, PMC, and VariationalBayes are unaware of the desired interval, and may attempt \(\theta\) outside of this interval. One solution is to have the algorithms update logit(theta) rather
than theta. After logit(theta) is manipulated by the algorithm, it is transformed via invlogit(theta) in the model specification function, where \(\theta \in [0,1]\). | {"url":"https://www.rdocumentation.org/packages/LaplacesDemon/versions/16.1.6/topics/logit","timestamp":"2024-11-02T00:00:01Z","content_type":"text/html","content_length":"68945","record_id":"<urn:uuid:a18379f6-e4b7-4716-8ce6-e43b81396674>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00808.warc.gz"} |
Formula Applet
The Formula Applet
□ lets you create math or physics exercises on HTML pages using H5P.
□ However, the corresponding H5P plugin is not published in the H5P hub.
□ You can provide an expression as solution, and Formula Applet checks if your solution is right or wrong.
□ Algebraic equivalent solutions are also accepted.
□ Formula Applet is able to deal with physical units.
□ Formula Applet is open source.
□ FormulaApplet is written in JavaScript. These open source libraries are used: MathQuill, jQuery and Hammer.
• Check solution:
• Try again:
• The solutions are given in each case, so you can try them quickly. Solutions can be copied with copy and paste.
• Development of the FormulaApplet was discontinued in 2024.
Algebra Examples
9x – 3,5
9x – 3.5
9x – 3 \frac{1}{2}
Physics Example
A voltage of
U=24\ \unit{V}
is applied to a resistor of
R=0,30\ \unit{M\Omega}
What is the current I that flows through the resistor?
80\unit{µA} = 0.080\unit{mA} = 8.0\cdot10^{-5}\unit{A} | {"url":"http://www.formelapplet.de/en/home-2/","timestamp":"2024-11-03T00:41:58Z","content_type":"text/html","content_length":"54392","record_id":"<urn:uuid:02a0abcc-493e-4ab7-ad2b-09a3be23a0ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00170.warc.gz"} |
531 Calculator – Easy & Accurate Calculation Tool
This tool helps you calculate your weekly lift goals based on the 531 strength training program.
How to Use the 531 Calculator
To use the 531 calculator, you need to know your One-Rep Max (1RM) for a given lift. Follow these steps:
1. Enter your One-Rep Max (1RM) in the input box.
2. Select the training week from the dropdown menu.
3. Click the Calculate button.
4. The calculator will display the recommended weights and repetitions for that week.
How It Calculates the Results
The 531 calculator uses the following formula to determine the recommended weights:
• Week 1: 65% x Training Max (TM) for 5 reps, 75% x TM for 5 reps, 85% x TM for 5+ reps
• Week 2: 70% x TM for 3 reps, 80% x TM for 3 reps, 90% x TM for 3+ reps
• Week 3: 75% x TM for 5 reps, 85% x TM for 3 reps, 95% x TM for 1+ reps
• Week 4: 40% x TM for 5 reps, 50% x TM for 5 reps, 60% x TM for 5 reps
Your Training Max (TM) is calculated as 90% of your One-Rep Max (1RM).
Limitations of the Calculator
This calculator assumes you have an accurate One-Rep Max (1RM). Ensure you safely determine your 1RM beforehand. It does not account for individual differences in physiology or recovery capacity.
Always consult with a qualified coach or trainer for personalized advice.
Use Cases for This Calculator
Use Case 1: Addition
With the 531 calculator, you can easily add up numbers. Whether you’re calculating expenses, scores, or anything else, just input the numbers and hit the addition button to get the sum instantly.
Use Case 2: Subtraction
Need to subtract values quickly? No problem! The 531 calculator lets you subtract numbers effortlessly. Just enter the values and hit the subtraction button to get the result right away.
Use Case 3: Multiplication
When you have numbers to multiply, the 531 calculator is here to help. Input the values you want to multiply and press the multiplication button to get the product in an instant.
Use Case 4: Division
Dividing numbers is a breeze with the 531 calculator. Simply input the dividend and divisor, then hit the division button to get the quotient promptly.
Use Case 5: Percentage Calculation
Calculating percentages is made simple with the 531 calculator. Enter the percentage value and the number you want to calculate it against, then press the percentage button to find out the result
Use Case 6: Exponential Calculation
If you need to raise a number to a power, the 531 calculator is your go-to tool. Input the base and exponent, then click the exponential button to get the result swiftly.
Use Case 7: Square Root Calculation
Want to find the square root of a number? The 531 calculator has your back. Enter the number and click the square root button to get the square root value instantly.
Use Case 8: Clearing the Calculator
If you’ve made a mistake or want to start fresh, simply hit the clear button on the 531 calculator to reset the values and calculations, allowing you to begin anew without any hassle.
Use Case 9: Decimal Calculations
Whether you’re dealing with whole numbers or decimals, the 531 calculator accommodates both effortlessly. Perform calculations involving decimal points with precision and accuracy.
Use Case 10: Memory Function
Need to store a number temporarily for calculations? The 531 calculator offers a memory function, allowing you to save a value for later use and retrieve it whenever needed, enhancing your computing | {"url":"https://madecalculators.com/531-calculator/","timestamp":"2024-11-09T16:17:02Z","content_type":"text/html","content_length":"145312","record_id":"<urn:uuid:257a53c4-020a-4a08-a622-b06de1cd0679>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00603.warc.gz"} |
Divide irregular polygon into equal areas by angle
01-15-2018 01:20 PM
I have an irregular polygon feature that I need to split into x number of equal area features. I need to be able to have the divisions be something other than north/south or east/west.
The black polygon in this case needs to be divided into 3 equal area polygons. I manually split these, but I need it to be more precise and the splits need to be aligned in parallel to each other.
They need to be able to be calculated with an angle/rotation specification, and not just north/south or east/west.
What I have tried:
• This excellent code is the closest I have gotten to splitting an irregular polygon up into equal are.... Unfortunately, it only runs for north/south or east/west lines. I can't figure out how to
rotate it and I don't have enough coding experience/time to efficiently edit it for my needs.
• Fishnet Tool (as suggested by this, this, this, and this) and this has an angle of rotation. However, this creates a grid, but not equal areas. Also, it is for the feature extent not shape.
• I cannot use the Parcel Fabrics tool (as suggested by this, this, this, and this) as I don't have the necessary licensing for it.
It really blows my mind that this isn't a tool that exists/is readily available.
Any advice/suggestions is welcomed. Or, if you really want to be a super hero, edit the code so I can specify an angle for rotation. Thank you very much!
01-15-2018 01:54 PM
01-15-2018 01:54 PM
01-16-2018 06:30 AM
01-15-2018 01:56 PM
01-16-2018 06:31 AM
01-16-2018 01:20 PM
01-16-2018 01:26 PM
01-17-2020 07:27 PM | {"url":"https://community.esri.com/t5/geoprocessing-questions/divide-irregular-polygon-into-equal-areas-by-angle/m-p/86939","timestamp":"2024-11-09T18:42:28Z","content_type":"text/html","content_length":"370629","record_id":"<urn:uuid:0807bbbe-88dc-4c6a-a4e6-20afdb6678df>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00135.warc.gz"} |
Fiat Shamir via List-Recoverable Codes (or: Parallel Repetition of GMW is not Zero-Knowledge)
In a seminal work, Goldreich, Micali and Wigderson (CRYPTO '86) demonstrated the wide applicability of zero-knowledge proofs by constructing such a proof system for the NP-complete problem of graph
3-coloring. A long-standing open question has been whether parallel repetition of their protocol preserves zero knowledge. In this work, we answer this question in the negative, assuming a standard
cryptographic assumption (i.e., the hardness of learning with errors (LWE)). Leveraging a connection observed by Dwork, Naor, Reingold, and Stockmeyer (FOCS '99), our negative result is obtained by
making positive progress on a related fundamental problem in cryptography: securely instantiating the Fiat-Shamir heuristic for eliminating interaction in public-coin interactive protocols. A recent
line of work has shown how to instantiate the heuristic securely, albeit only for a limited class of protocols. Our main result shows how to instantiate Fiat-Shamir for parallel repetitions of much
more general interactive proofs. In particular, we construct hash functions that, assuming LWE, securely realize the Fiat-Shamir transform for the following rich classes of protocols: 1) The parallel
repetition of any "commit-and-open"protocol (such as the GMW protocol mentioned above), when a specific (natural) commitment scheme is used. Commit-and-open protocols are a ubiquitous paradigm for
constructing general purpose public-coin zero knowledge proofs. 2) The parallel repetition of any base protocol that (1) satisfies a stronger notion of soundness called round-by-round soundness, and
(2) has an efficient procedure, using a suitable trapdoor, for recognizing "bad verifier randomness"that would allow the prover to cheat. Our results are obtained by establishing a new connection
between the Fiat-Shamir transform and list-recoverable codes. In contrast to the usual focus in coding theory, we focus on a parameter regime in which the input lists are extremely large, but the
rate can be small. We give a (probabilistic) construction based on Parvaresh-Vardy codes (FOCS '05) that suffices for our applications.
Original language English (US)
Title of host publication STOC 2021 - Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing
Editors Samir Khuller, Virginia Vassilevska Williams
Publisher Association for Computing Machinery
Pages 750-760
Number of pages 11
ISBN (Electronic) 9781450380539
State Published - Jun 15 2021
Externally published Yes
Event 53rd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2021 - Virtual, Online, Italy
Duration: Jun 21 2021 → Jun 25 2021
Publication series
Name Proceedings of the Annual ACM Symposium on Theory of Computing
ISSN (Print) 0737-8017
Conference 53rd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2021
Country/Territory Italy
City Virtual, Online
Period 6/21/21 → 6/25/21
All Science Journal Classification (ASJC) codes
• Fiat-Shamir heuristic
• cryptographic protocols
• list-recoverable codes
• zero-knowledge protocols
Dive into the research topics of 'Fiat Shamir via List-Recoverable Codes (or: Parallel Repetition of GMW is not Zero-Knowledge)'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/fiat-shamir-via-list-recoverable-codes-or-parallel-repetition-of-","timestamp":"2024-11-03T13:12:16Z","content_type":"text/html","content_length":"58238","record_id":"<urn:uuid:f9fdfb0a-51d9-46b6-a613-b95b92f32178>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00281.warc.gz"} |
A team of five is to be selected from amongst five boys A, B, C, D and E and four girls P, Q, R and S. Some criteria for selecation are :A and S have to be together.P cannot be put with R.D and Q cannot be together.C and E have to be together.R cannot be put with B.Unless otherwise stated, these criteria are applicable to all the question below :
If two of the members have to be boys, the team will consist of : - Selection Based On Given Conditions - Puzzle Test - Verbal Reasoning Questions and Answers
A team of five is to be selected from amongst five boys A, B, C, D and E and four girls P, Q, R and S. Some criteria for selecation are :
A and S have to be together.
P cannot be put with R.
D and Q cannot be together.
C and E have to be together.
R cannot be put with B.
Unless otherwise stated, these criteria are applicable to all the question below : If two of the members have to be boys, the team will consist of : | {"url":"https://edugoog.com/details/p-a-team-of-five-is-to-be-selected-from-amongst-five-boys-a-b-c-d-and-e-and-four-girls-p-q-r-and-s-some-criteria-for-selecation-are-br-a-and-s-have-to-be-together-br-p-cannot-be-put-with-r-br-d-and-q-cannot-be-together-br-c-and-e-have-to-be-together-br.html","timestamp":"2024-11-07T12:03:01Z","content_type":"text/html","content_length":"105445","record_id":"<urn:uuid:362f6674-cc60-44c3-be8d-47f0d3b1b3b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00151.warc.gz"} |
Why does the Dividend Aristocrats Index beat the S&P 500?
The thread is predicated on the following chart, asking why the white line beats the orange line:
Upon inspection, the white line soars above the orange line in 2014, and both lines represent total return:
Next, for comparison, I showed the total return in 2014 of two actual funds which follow these indexes, and we see one does not soar over the other:
It's a rather small bump, not soaring. In fact, if we look at the fund which tracks the Dividend Aristocrats, from its inception to today, it's actually lagging the fund which tracks the S&P500:
All charts in this post are total return charts. Indeed, there is a growing gap that can't be explained by the the expense ratio, or by reinvesting dividends, because they are already included here.
At this point I'm genuinely curious. Why can't the dramatic outperformance of the Dividend Aristocrats Index be replicated in the real world? | {"url":"https://forum.mrmoneymustache.com/investor-alley/why-does-the-dividend-aristocrats-index-beat-the-sp-500/50/","timestamp":"2024-11-04T04:23:20Z","content_type":"application/xhtml+xml","content_length":"60101","record_id":"<urn:uuid:8b45bc57-0324-4625-b683-6bea3118cd08>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00736.warc.gz"} |
Fractions 80134 - math word problem (80134)
Fractions 80134
There are 420 pupils in the school. Two hundred fifty-two pupils go to the 1st level. Write as a fraction what part of the pupils goes to the 1st grade and what part to the 2nd grade. Shorten both
fractions to their basic form.
Correct answer:
Did you find an error or inaccuracy? Feel free to
write us
. Thank you!
Tips for related online calculators
Need help calculating sum, simplifying, or multiplying fractions? Try our
fraction calculator
You need to know the following knowledge to solve this word math problem:
Grade of the word problem:
Related math problems and questions: | {"url":"https://www.hackmath.net/en/math-problem/80134","timestamp":"2024-11-04T07:15:08Z","content_type":"text/html","content_length":"56815","record_id":"<urn:uuid:067a8433-7ec4-49f2-a686-67f3ea61eec1>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00519.warc.gz"} |
Explorations of Mathematical Models in Biology with Maple - Maplesoft Books
As biology increasingly depends on data, algorithms, and models, it has become necessary to use a computing language, such as the user-friendly Maple, to focus more on building and analyzing models
as opposed to configuring tedious calculations. Explorations of Mathematical Models in Biology with Maple provides an introduction to model creation using Maple, followed by the translation,
analysis, interpretation, and observation of the models.
With an integrated and interdisciplinary approach that embeds mathematical modeling into biological applications, the book illustrates numerous applications of mathematical techniques within biology,
ecology, and environmental sciences. Featuring a quantitative, computational, and mathematical approach, the book includes:
• Examples of real-world applications, such as population dynamics, genetics, drug administration, interacting species, and the spread of contagious diseases, to showcase the relevancy and wide
applicability of abstract mathematical techniques
• Discussion of various mathematical concepts, such as Markov chains, matrix algebra, eigenvalues, eigenvectors, first-order linear difference equations, and nonlinear first order difference
• Coverage of difference equations to model a wide range of real-life discrete time situations in diverse areas as well as discussions on matrices to model linear problems
• Solutions to selected exercises and additional Maple codes
Explorations of Mathematical Models in Biology with Maple is an ideal textbook for undergraduate courses in mathematical models in biology, theoretical ecology, bioeconomics, forensic science,
applied mathematics, and environmental science. The book is also an excellent reference for biologists, ecologists, mathematicians, biomathematicians, and environmental and resource economists. | {"url":"https://cn.maplesoft.com/books/details.aspx?id=456&ref=facebook,facebook&L=C","timestamp":"2024-11-10T12:03:34Z","content_type":"application/xhtml+xml","content_length":"70773","record_id":"<urn:uuid:664bf82b-703a-4243-8f8e-721ea8227733>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00423.warc.gz"} |
The distance (or perpendicular distance) from a point to a line is the shortest distance from a fixed point to any point on a fixed infinite line in Euclidean geometry. It is the length of the line
segment which joins the point to the line and is perpendicular to the line. The formula for calculating it can be derived and expressed in several ways.
Knowing the shortest distance from a point to a line can be useful in various situations—for example, finding the shortest distance to reach a road, quantifying the scatter on a graph, etc. In Deming
regression, a type of linear curve fitting, if the dependent and independent variables have equal variance this results in orthogonal regression in which the degree of imperfection of the fit is
measured for each data point as the perpendicular distance of the point from the regression line.
Cartesian coordinates
Line defined by an equation
In the case of a line in the plane given by the equation ax + by + c = 0, where a, b and c are real constants with a and b not both zero, the distance from the line to a point (x[0],y[0]) is^[1]^[2]^
${\displaystyle \operatorname {distance} (ax+by+c=0,(x_{0},y_{0}))={\frac {|ax_{0}+by_{0}+c|}{\sqrt {a^{2}+b^{2}}}}.}$
The point on this line which is closest to (x[0],y[0]) has coordinates:^[3]
${\displaystyle x={\frac {b(bx_{0}-ay_{0})-ac}{a^{2}+b^{2}}}{\text{ and }}y={\frac {a(-bx_{0}+ay_{0})-bc}{a^{2}+b^{2}}}.}$
Horizontal and vertical lines
In the general equation of a line, ax + by + c = 0, a and b cannot both be zero unless c is also zero, in which case the equation does not define a line. If a = 0 and b ≠ 0, the line is horizontal
and has equation y = -c/b. The distance from (x[0], y[0]) to this line is measured along a vertical line segment of length |y[0] - (-c/b)| = |by[0] + c| / |b| in accordance with the formula.
Similarly, for vertical lines (b = 0) the distance between the same point and the line is |ax[0] + c| / |a|, as measured along a horizontal line segment.
Line defined by two points
If the line passes through two points P[1]=(x[1],y[1]) and P[2]=(x[2],y[2]) then the distance of (x[0],y[0]) from the line is:
${\displaystyle \operatorname {distance} (P_{1},P_{2},(x_{0},y_{0}))={\frac {|(y_{2}-y_{1})x_{0}-(x_{2}-x_{1})y_{0}+x_{2}y_{1}-y_{2}x_{1}|}{\sqrt {(y_{2}-y_{1})^{2}+(x_{2}-x_{1})^{2}}}}.}$
The denominator of this expression is the distance between P[1] and P[2]. The numerator is twice the area of the triangle with its vertices at the three points, (x[0],y[0]), P[1] and P[2]. See: Area
of a triangle § Using coordinates. The expression is equivalent to ${\textstyle h={\frac {2A}{b}}}$ , which can be obtained by rearranging the standard formula for the area of a triangle: ${\
textstyle A={\frac {1}{2}}bh}$ , where b is the length of a side, and h is the perpendicular height from the opposite vertex.
An algebraic proof
This proof is valid only if the line is neither vertical nor horizontal, that is, we assume that neither a nor b in the equation of the line is zero.
The line with equation ax + by + c = 0 has slope -a/b, so any line perpendicular to it will have slope b/a (the negative reciprocal). Let (m, n) be the point of intersection of the line ax + by + c =
0 and the line perpendicular to it which passes through the point (x[0], y[0]). The line through these two points is perpendicular to the original line, so
${\displaystyle {\frac {y_{0}-n}{x_{0}-m}}={\frac {b}{a}}.}$
Thus, ${\displaystyle a(y_{0}-n)-b(x_{0}-m)=0,}$ and by squaring this equation we obtain:
${\displaystyle a^{2}(y_{0}-n)^{2}+b^{2}(x_{0}-m)^{2}=2ab(y_{0}-n)(x_{0}-m).}$
Now consider,
${\displaystyle (a(x_{0}-m)+b(y_{0}-n))^{2}=a^{2}(x_{0}-m)^{2}+2ab(y_{0}-n)(x_{0}-m)+b^{2}(y_{0}-n)^{2}=(a^{2}+b^{2})((x_{0}-m)^{2}+(y_{0}-n)^{2})}$
using the above squared equation. But we also have,
${\displaystyle (a(x_{0}-m)+b(y_{0}-n))^{2}=(ax_{0}+by_{0}-am-bn)^{2}=(ax_{0}+by_{0}+c)^{2}}$
since (m, n) is on ax + by + c = 0. Thus,
${\displaystyle (a^{2}+b^{2})((x_{0}-m)^{2}+(y_{0}-n)^{2})=(ax_{0}+by_{0}+c)^{2}}$
and we obtain the length of the line segment determined by these two points,
${\displaystyle d={\sqrt {(x_{0}-m)^{2}+(y_{0}-n)^{2}}}={\frac {|ax_{0}+by_{0}+c|}{\sqrt {a^{2}+b^{2}}}}.}$ ^[4]
A geometric proof
Diagram for geometric proof
This proof is valid only if the line is not horizontal or vertical.^[5]
Drop a perpendicular from the point P with coordinates (x[0], y[0]) to the line with equation Ax + By + C = 0. Label the foot of the perpendicular R. Draw the vertical line through P and label its
intersection with the given line S. At any point T on the line, draw a right triangle TVU whose sides are horizontal and vertical line segments with hypotenuse TU on the given line and horizontal
side of length |B| (see diagram). The vertical side of ∆TVU will have length |A| since the line has slope -A/B.
∆PRS and ∆TVU are similar triangles, since they are both right triangles and ∠PSR ≅ ∠TUV since they are corresponding angles of a transversal to the parallel lines PS and UV (both are vertical
lines).^[6] Corresponding sides of these triangles are in the same ratio, so:
${\displaystyle {\frac {|{\overline {PR}}|}{|{\overline {PS}}|}}={\frac {|{\overline {TV}}|}{|{\overline {TU}}|}}.}$
If point S has coordinates (x[0],m) then |PS| = |y[0] - m| and the distance from P to the line is:
${\displaystyle |{\overline {PR}}|={\frac {|y_{0}-m||B|}{\sqrt {A^{2}+B^{2}}}}.}$
Since S is on the line, we can find the value of m,
${\displaystyle m={\frac {-Ax_{0}-C}{B}},}$
and finally obtain:^[7]
${\displaystyle |{\overline {PR}}|={\frac {|Ax_{0}+By_{0}+C|}{\sqrt {A^{2}+B^{2}}}}.}$
A variation of this proof is to place V at P and compute the area of the triangle ∆UVT two ways to obtain that ${\displaystyle D|{\overline {TU}}|=|{\overline {VU}}||{\overline {VT}}|}$ where D is
the altitude of ∆UVT drawn to the hypoteneuse of ∆UVT from P. The distance formula can then used to express ${\displaystyle |{\overline {TU}}|}$ , ${\displaystyle |{\overline {VU}}|}$ , and ${\
displaystyle |{\overline {VT}}|}$ in terms of the coordinates of P and the coefficients of the equation of the line to get the indicated formula.
A vector projection proof
Diagram for vector projection proof
Let P be the point with coordinates (x[0], y[0]) and let the given line have equation ax + by + c = 0. Also, let Q = (x[1], y[1]) be any point on this line and n the vector (a, b) starting at point Q
. The vector n is perpendicular to the line, and the distance d from point P to the line is equal to the length of the orthogonal projection of ${\displaystyle {\overrightarrow {QP}}}$ on n. The
length of this projection is given by:
${\displaystyle d={\frac {|{\overrightarrow {QP}}\cdot \mathbf {n} |}{\|\mathbf {n} \|}}.}$
${\displaystyle {\overrightarrow {QP}}=(x_{0}-x_{1},y_{0}-y_{1}),}$ so ${\displaystyle {\overrightarrow {QP}}\cdot \mathbf {n} =a(x_{0}-x_{1})+b(y_{0}-y_{1})}$ and ${\displaystyle \|\mathbf {n} \
|={\sqrt {a^{2}+b^{2}}},}$
${\displaystyle d={\frac {|a(x_{0}-x_{1})+b(y_{0}-y_{1})|}{\sqrt {a^{2}+b^{2}}}}.}$
Since Q is a point on the line, ${\displaystyle c=-ax_{1}-by_{1}}$ , and so,^[8]
${\displaystyle d={\frac {|ax_{0}+by_{0}+c|}{\sqrt {a^{2}+b^{2}}}}.}$
Another formula
It is possible to produce another expression to find the shortest distance of a point to a line. This derivation also requires that the line is not vertical or horizontal.
The point P is given with coordinates (${\displaystyle x_{0},y_{0}}$ ). The equation of a line is given by ${\displaystyle y=mx+k}$ . The equation of the normal of that line which passes through the
point P is given ${\displaystyle y={\frac {x_{0}-x}{m}}+y_{0}}$ .
The point at which these two lines intersect is the closest point on the original line to the point P. Hence:
${\displaystyle mx+k={\frac {x_{0}-x}{m}}+y_{0}.}$
We can solve this equation for x,
${\displaystyle x={\frac {x_{0}+my_{0}-mk}{m^{2}+1}}.}$
The y coordinate of the point of intersection can be found by substituting this value of x into the equation of the original line,
${\displaystyle y=m{\frac {(x_{0}+my_{0}-mk)}{m^{2}+1}}+k.}$
Using the equation for finding the distance between 2 points, ${\displaystyle d={\sqrt {(X_{2}-X_{1})^{2}+(Y_{2}-Y_{1})^{2}}}}$ , we can deduce that the formula to find the shortest distance between
a line and a point is the following:
${\displaystyle d={\sqrt {\left({{\frac {x_{0}+my_{0}-mk}{m^{2}+1}}-x_{0}}\right)^{2}+\left({m{\frac {x_{0}+my_{0}-mk}{m^{2}+1}}+k-y_{0}}\right)^{2}}}={\frac {|k+mx_{0}-y_{0}|}{\sqrt {1+m^
Recalling that m = -a/b and k = - c/b for the line with equation ax + by + c = 0, a little algebraic simplification reduces this to the standard expression.^[9]
Vector formulation
Illustration of the vector formulation.
The equation of a line can be given in vector form:
${\displaystyle \mathbf {x} =\mathbf {a} +t\mathbf {n} }$
Here a is the position of a point on the line, and n is a unit vector in the direction of the line. Then as scalar t varies, x gives the locus of the line.
The distance of an arbitrary point p to this line is given by
${\displaystyle \operatorname {distance} (\mathbf {x} =\mathbf {a} +t\mathbf {n} ,\mathbf {p} )=\|(\mathbf {a} -\mathbf {p} )-((\mathbf {a} -\mathbf {p} )\cdot \mathbf {n} )\mathbf {n} \|.}$
This formula can be derived as follows: ${\displaystyle \mathbf {a} -\mathbf {p} }$ is a vector from p to the point a on the line. Then ${\displaystyle (\mathbf {a} -\mathbf {p} )\cdot \mathbf {n} }$
is the projected length onto the line and so
${\displaystyle ((\mathbf {a} -\mathbf {p} )\cdot \mathbf {n} )\mathbf {n} }$
is a vector that is the projection of ${\displaystyle \mathbf {a} -\mathbf {p} }$ onto the line. Thus
${\displaystyle (\mathbf {a} -\mathbf {p} )-((\mathbf {a} -\mathbf {p} )\cdot \mathbf {n} )\mathbf {n} }$
is the component of ${\displaystyle \mathbf {a} -\mathbf {p} }$ perpendicular to the line. The distance from the point to the line is then just the norm of that vector.^[10] This more general formula
is not restricted to two dimensions.
Another vector formulation
If the line (l) goes through point A and has a direction vector ${\displaystyle {\vec {u}}}$ , the distance between point P and line (l) is
${\displaystyle d(\mathrm {P} ,(l))={\frac {\left\|{\overrightarrow {\mathrm {AP} }}\times {\vec {u}}\right\|}{\|{\vec {u}}\|}}}$
where ${\displaystyle {\overrightarrow {\mathrm {AP} }}\times {\vec {u}}}$ is the cross product of the vectors ${\displaystyle {\overrightarrow {\mathrm {AP} }}}$ and ${\displaystyle {\vec {u}}}$ and
where ${\displaystyle \|{\vec {u}}\|}$ is the vector norm of ${\displaystyle {\vec {u}}}$ .
Note that cross products only exist in dimensions 3 and 7 and trivially in dimensions 0 and 1 (where the cross product is constant 0).
See also
Further reading
• Deza, Michel Marie; Deza, Elena (2013), Encyclopedia of Distances (2nd ed.), Springer, p. 86, ISBN 9783642309588 | {"url":"https://www.knowpia.com/knowpedia/Distance_from_a_point_to_a_line","timestamp":"2024-11-12T05:46:54Z","content_type":"text/html","content_length":"209852","record_id":"<urn:uuid:81ad0c4c-e1ea-4c39-bb48-e32b4c72c065>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00253.warc.gz"} |
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
Absolutely genius! Thanks!
R.B., Kentucky
Great stuff! As I was doing an Internet search on effective math software, I came across the Algebrator Web site. I decided to purchase the software after seeing the online demo and never regretted
it. Thanks!
Hillary Sill, VI.
My former algebra tutor got impatient whenever I couldnt figure out an equation. I eventually got tired of her so I decided to try the software. Im so impressed with it! I cant stress enough how
great it is!
Lee Wyatt, TX
I am so far quite pleased with the program
Perry Huges, KY
Search phrases used on 2009-05-09:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
• what is the PTAAS test in texas for?
• Multiplying and dividing odd square roots
• exponent pizzazz worksheet
• Lesson Plans Dividing Decimals
• Worksheet Solving Quadratic Functions
• percentage formulas
• radical math help answers
• 7th grade math homework cheats
• mathematic worksheets for 6th graders
• ti-83 physics program yourself
• leaner equations
• math prentice hall grade 8 help
• math trivia for kids
• expert lcm worksheet
• Factor as the sum or difference between two cubes
• free download mathimatical formulas
• formula within solver excel
• math-how to find percentages in grading math problem
• the hardest math problem in the world
• adding and subtracting integers printables
• algebra calculator that finds square root
• practice worksheet for components of a sentence
• polynomial java
• online equation solver with steps
• calculator decimal to root
• factorial formula for triangular numbers
• Prentice Hall Algebra book answers
• "grade 8" + "math" + "statistics and probability worksheets"
• depreciating value calculations webmath
• system of exponential equation calculator
• "how to factor trinomial"
• free printable 5th grade geometric concepts practice test
• algebra cheat sheet
• free apptitude ebook
• maths for class7 exponent exercise
• adding radical expression calculator
• prime factorization to reduce fractions worksheet
• Free algebra 2 answers
• exponent problems/printables
• printable maths practice paper for 3rd grade
• Standard form to vertex form
• online polynomial solution
• quadratic formula poems
• math algebra one solver
• trivia of multiplication problem solving
• balancing method of solving linear equations
• aptitude questions pdf
• how to factor out polynomials
• texas instruments T1-84 games instructions
• square roots algabra practice worksheets
• Where an I get answers to the harcourt science tests
• pre algebra ppt
• variable expression lesson plans
• ucsmp advanced algebra scott, foresman and company test b answers
• ti 83 plus game source
• prealgerbra worksheets
• beginners algebra
• highschool math, completing the square
• addig and subtracting negative numbers
• grade 9 proportion worksheets
• easy calculator graph pictures
• mathmatics notation ln
• algebraic expressions worksheets
• maths tests for ks3
• college algrebra
• differential exponential functions and online graphing calculators
• how do i solve for the range and domain of a hyperbola
• least squares problems maple
• algebra help
• algebra 2 and trigonometry littell mcdougal picture
• texas instruments radical expressions
• square root lists through 100
• rule convert mixed number to decimals
• math permutation and combination
• the difference between plotting the two different types of compound inequality ("and" versus
• simultaneous exponential equation solver
• free online maths test ks3
• math lesson plan, 1st grade, california standards
• what is the common factors of 24 and 26
• ti-83 plus "rom image" download
• glencoe and algebra 2 and master for teachers
• function machine printables
• printable pictograph worksheets for elementary math
• radical expressions solver
• maths exam sheet for year 8
• formulas ch. 11 geometry mcdougal
• answers to practice workbook Glencoe Algebra 1 | {"url":"https://softmath.com/algebra-help/writing-expression-radical-sig.html","timestamp":"2024-11-11T13:49:37Z","content_type":"text/html","content_length":"35274","record_id":"<urn:uuid:1a757c0e-43e2-4b6b-a7d6-3855c38062c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00196.warc.gz"} |
A force acting on a particle moving in the xy plane is given by... | Filo
Question asked by Filo student
A force acting on a particle moving in the plane is given by where is in newtons and and are in meters. The particle moves from the origin to a final position having coordinates and as shown in
Figure Calculate the work done by on the particle as it moves along (a) the purple path, (b) the red path, and (c) the blue path. (d) Is conservative or nonconservative? (e) Explain your answer to
part (d).
Not the question you're searching for?
+ Ask your question
Key Concepts: Force, Work, Conservative Force Explanation: We can calculate the work done by the force on the particle using the line integral of force along the given paths. Additionally, we can
determine if the given force is conservative or not by checking if it satisfies the conditions for conservative forces, where the work done only depends on the initial and final positions, and not on
the path taken between them. Step by Step Solution: Step 1. Calculate the work done on the particle along the purple path Step 2. Calculate the work done on the particle along the red path Step 3.
Calculate the work done on the particle along the blue path Step 4. Determine if the force is conservative or nonconservative Step 5. Explain the reasoning behind the determination in step4 Final
Answer: a) \n b) \n c) \n d) Nonconservative force \n e) The force is nonconservative because the work done by it depends on the path taken as shown by the different values of work along the
different paths.
Found 7 tutors discussing this question
Discuss this question LIVE
15 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on All topics
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Physics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
A force acting on a particle moving in the plane is given by where is in newtons and and are in meters. The particle moves from the origin to a final position having coordinates and as shown
Question in Figure Calculate the work done by on the particle as it moves along (a) the purple path, (b) the red path, and (c) the blue path. (d) Is conservative or nonconservative? (e) Explain your
Text answer to part (d).
Updated Feb 8, 2024
Topic All topics
Subject Physics
Class Class 11
Answer Text solution:1 | {"url":"https://askfilo.com/user-question-answers-physics/a-force-acting-on-a-particle-moving-in-the-plane-is-given-by-36383139353338","timestamp":"2024-11-14T07:19:52Z","content_type":"text/html","content_length":"312822","record_id":"<urn:uuid:a0c4b2f1-79e8-45c8-bb47-db98bbc6d9cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00650.warc.gz"} |
Back to Papers Home Back to Papers of School of Physics
Paper IPM / P / 15144
School of Physics
Title: On Complexity Growth with Lifshitz Scaling and Hyperscaling Violation
Author(s): 1. M. Alishahiha
2. A. Faraji Astaneh
3. M.R. Mohammadi Mozaffar
4. A. Mollabashi
Status: Published
Journal: JHEP
Vol.: 07
Year: 2018
Pages: 042
Supported by: IPM
Using complexity=action proposal we study the growth rate of holographic complexity for Lifshitz and hyperscaling violating geometries. We will consider both one and two sided black branes in an
Einstein-Maxwell-Dilaton gravitational theory. We find that in either case Lloyd's bound is violated and the rate of growth of complexity saturates to a value which is greater than twice the mass of
the corresponding black brane. This value reduces to the mass of the black brane in the isotropic case. We show that in two sided black brane the saturation happens from above while for one sided
black brane it happens from below.
Download TeX format
back to top | {"url":"https://www.ipm.ac.ir/ViewPaperInfo.jsp?PTID=15144&school=Physics","timestamp":"2024-11-08T05:35:02Z","content_type":"text/html","content_length":"41934","record_id":"<urn:uuid:41e5b422-c06d-4d93-99f1-a46847888a29>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00603.warc.gz"} |
Lectures on Finite Fieldssearch
Item Successfully Added to Cart
An error was encountered while trying to add the item to the cart. Please try again.
Please make all selections above before adding to cart
Lectures on Finite Fields
Hardcover ISBN: 978-1-4704-4289-7
Product Code: GSM/190
List Price: $135.00
MAA Member Price: $121.50
AMS Member Price: $108.00
eBook ISBN: 978-1-4704-4727-4
Product Code: GSM/190.E
List Price: $85.00
MAA Member Price: $76.50
AMS Member Price: $68.00
Hardcover ISBN: 978-1-4704-4289-7
eBook: ISBN: 978-1-4704-4727-4
Product Code: GSM/190.B
List Price: $220.00 $177.50
MAA Member Price: $198.00 $159.75
AMS Member Price: $176.00 $142.00
Click above image for expanded view
Lectures on Finite Fields
Hardcover ISBN: 978-1-4704-4289-7
Product Code: GSM/190
List Price: $135.00
MAA Member Price: $121.50
AMS Member Price: $108.00
eBook ISBN: 978-1-4704-4727-4
Product Code: GSM/190.E
List Price: $85.00
MAA Member Price: $76.50
AMS Member Price: $68.00
Hardcover ISBN: 978-1-4704-4289-7
eBook ISBN: 978-1-4704-4727-4
Product Code: GSM/190.B
List Price: $220.00 $177.50
MAA Member Price: $198.00 $159.75
AMS Member Price: $176.00 $142.00
• Graduate Studies in Mathematics
Volume: 190; 2018; 240 pp
MSC: Primary 11
The theory of finite fields encompasses algebra, combinatorics, and number theory and has furnished widespread applications in other areas of mathematics and computer science. This book is a
collection of selected topics in the theory of finite fields and related areas. The topics include basic facts about finite fields, polynomials over finite fields, Gauss sums, algebraic number
theory and cyclotomic fields, zeros of polynomials over finite fields, and classical groups over finite fields. The book is mostly self-contained, and the material covered is accessible to
readers with the knowledge of graduate algebra; the only exception is a section on function fields. Each chapter is supplied with a set of exercises. The book can be adopted as a text for a
second year graduate course or used as a reference by researchers.
Graduate students and researchers interested in number theory, in particular, the theory of finite fields.
□ Chapters
□ Preliminaries
□ Polynomials over finite fields
□ Gauss sums
□ Algebraic number theory
□ Zeros of polynomials over finite fields
□ Classical groups
• Permission – for use of book, eBook, or Journal content
• Book Details
• Table of Contents
• Additional Material
• Requests
Volume: 190; 2018; 240 pp
MSC: Primary 11
The theory of finite fields encompasses algebra, combinatorics, and number theory and has furnished widespread applications in other areas of mathematics and computer science. This book is a
collection of selected topics in the theory of finite fields and related areas. The topics include basic facts about finite fields, polynomials over finite fields, Gauss sums, algebraic number theory
and cyclotomic fields, zeros of polynomials over finite fields, and classical groups over finite fields. The book is mostly self-contained, and the material covered is accessible to readers with the
knowledge of graduate algebra; the only exception is a section on function fields. Each chapter is supplied with a set of exercises. The book can be adopted as a text for a second year graduate
course or used as a reference by researchers.
Graduate students and researchers interested in number theory, in particular, the theory of finite fields.
• Chapters
• Preliminaries
• Polynomials over finite fields
• Gauss sums
• Algebraic number theory
• Zeros of polynomials over finite fields
• Classical groups
Permission – for use of book, eBook, or Journal content
You may be interested in...
Please select which format for which you are requesting permissions. | {"url":"https://bookstore.ams.org/GSM/190","timestamp":"2024-11-12T05:39:32Z","content_type":"text/html","content_length":"94662","record_id":"<urn:uuid:6f8ecf3b-91cf-4b3e-98aa-df3a1ea7465b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00258.warc.gz"} |
Angles of Intersecting Lines in a Circle
Lesson Video: Angles of Intersecting Lines in a Circle Mathematics • Third Year of Preparatory School
In this video, we will learn how to find the measures of angles resulting from the intersection of two chords, two secants, two tangents, or tangents and secants in a circle.
Video Transcript
Angles of Intersecting Lines in a Circle
In this video, we will learn how to find the measures of angles resulting from the intersection of two chords, two secants, two tangents, or a tangent and a secant in a circle. To do this, letβ s
start with the case of two chords intersecting inside of a circle. Here, weβ ve drawn the chord π ΄π ΅ intersecting the chord π Άπ ·. Weβ ll call the point of intersection π Έ.
We want to find an expression for one of the angles between these two chords. Letβ s try and find an expression for the measure of the angle π ΅π Έπ Ά. To do this, we start by joining the points
π ΄ and π Ά, giving us a triangle π ΄π Έπ Ά. The first thing we note is angles π ΄π Έπ Ά and π ΅π Έπ Ά make up a straight line. So the sum of their measures is 180 degrees.
Next, we also have that π ΄π Έπ Ά is a triangle. So the sum of the measures of the internal angles of this triangle will be 180 degrees. Both of the expressions on the left-hand side of the
equation are equal to 180 degrees. So the left-hand side of both equations must be equal. And if we remove the measure of angle π ΄π Έπ Ά from both sides of this equation, we have the measure of
angle π ΅π Έπ Ά must be equal to the measure of angle π ΄π Άπ Έ plus the measure of angle π Έπ ΄π Ά.
Another way of thinking about this is both sides of the equation add to the measure of angle π ΄π Έπ Ά to make 180 degrees. We can rewrite this equation even further. First, letβ s take a look at
angle π ΄π Άπ Έ. We can see that angle π ΄π Άπ Έ is exactly the same as angle π ΄π Άπ ·. Itβ s on the circumference of our circle. In particular, this angle is subtended by the minor arc
from π ΄ to π ·. And we recall, whenever this happens, this means the measure of the angle will be one-half the measure of the arc. The measure of angle π ΄π Άπ Έ is one-half the measure of the
arc π ΄π ·. We can then do exactly the same for our other angle, angle π Έπ ΄π Ά. This time, the minor arc from π ΅ to π Ά subtends this angle. Therefore, the measure of angle π Έπ ΄π Ά is
equal to one-half the measure of the arc π ΅π Ά.
We can substitute both of these expressions for the angles into our equation. This gives us the measure of angle π ΅π Έπ Ά is one-half the measure of arc π ΄π · plus one-half the measure of arc
π ΅π Ά. We can then take out a factor of one-half to get the following equation. The measure of angle π ΅π Έπ Ά is one-half the sum of the measure of arc π ΄π · and the measure of arc π ΅π
Another way of thinking about this is weβ re taking the average of the measures of the two arcs opposite our angle in the circle. And in exactly the same way, we can find an expression for one of
the other angles at point π Έ. In exactly the same way, the measure of the angle will be one-half the sum of the measures of the arcs opposite the angle. The measure of angle π ΄π Έπ Ά is equal
to one-half times the measure of arc π ΄π Ά plus the measure of arc π ΅π ·.
We can write this result formally as follows. If two chords π ΄π ΅ and π Άπ · in a circle meet at a point π Έ, then the measure of either angle between the two chords is half the sum of the
measures of the arcs opposite the angle, giving us the following two formulas. The measure of angle π ΅π Έπ Ά is one-half the measure of arc π ΄π · plus the measure of arc π ΅π Ά. And the
measure of angle π ΄π Έπ Ά is one-half the measure of arc π ΄π Ά plus the measure of arc π ΅π ·. Letβ s see an example of applying this result to find the measure of an angle between two
chords in a circle.
Find π ₯.
In this question, weβ re asked to find the value of π ₯. And we can see that π ₯ is the angle between two chords in our circle. Thatβ s the chord π ΄π ΅ and the chord π Άπ ·. We can recall
the following fact. The angle between two chords in a circle is one-half the sum of the measures of the arcs opposite the angle. And in our diagram, weβ re given the measures of both of the arcs
opposite our angle π ₯. Thatβ s the arc π ΄π Ά, which has measure 73 degrees, and the arc π ·π ΅, which has measure 133 degrees. So by applying this result, we must have that π ₯ is equal to
one-half times 73 degrees plus 133 degrees. We can then evaluate this expression. 73 plus 133 is 206, and then one-half of this is 103 degrees.
Therefore, we were able to show that if π ₯ is the angle shown in the diagram, π ₯ is equal to 103 degrees.
We can follow a very similar method to our last proof to help us find an angle between two secant lines which intersect outside of a circle, where we remember a secant line is the line extension of a
chord. For example, letβ s consider the following diagram, which has the secant line π ΄π ΅ and the secant line π Άπ ·, which intersect at point π Έ. We want to find an expression for the
measure of angle π ΅π Έπ ·. To do this, weβ ll once again create a triangle. This time, weβ ll create a triangle by connecting the points π ΄ and π · with a line.
We can follow the exact same method we did in the last proof to find an expression for the measure of angle π ΅π Έπ ·. First, angle π Άπ ·π ΄ and angle π ΄π ·π Έ make up a straight line, so
their sum is 180 degrees. And we also know the sum of the measures of the internal angles of the triangle π ΄π ·π Έ will also add to 180 degrees. So the measure of angle π ΄π Έπ · plus the
measure of angle π ·π ΄π Έ plus the measure of angle π ΄π ·π Έ is equal to 180 degrees. Both of these expressions are equal to 180 degrees. So we can equate the left-hand side of both
equations. And we can also note that the left-hand side of the equation both have a term measure of angle π ΄π ·π Έ, so we can remove this. This gives us the measure of angle π ΄π Έπ · plus the
measure of angle π ·π ΄π Έ is equal to the measure of angle π Άπ ·π ΄.
One way of seeing this is that both sides of our equation add to the measure of angle π ΄π ·π Έ to give us a value of 180 degrees. We want to find an expression for the measure of angle π ΄π Έπ
·. So weβ ll subtract the measure of angle π ·π ΄π Έ from both sides of the equation. This gives us that the measure of angle π ΄π Έπ · is the measure of angle π Άπ ·π ΄ minus the measure
of angle π ·π ΄π Έ.
Finally, we can find expressions for both of these angles since both of these angles are subtended by arcs in our circle. First, angle π ·π ΄π Έ is subtended by the arc from π ΅ to π ·. Second,
angle π Άπ ·π ΄ is subtended by the arc from π ΄ to π Ά. And we recall that inscribed angles are one-half the measure of the arcs that they are subtended by. Therefore, the measure of angle π
Άπ ·π ΄ is one-half the measure of the arc from π ΄ to π Ά and the measure of angle π ·π ΄π Έ is one-half the measure of the arc from π ΅ to π ·. And the difference between these two values
is the measure of angle π ΄π Έπ ·.
Finally, we can take out the factor of one-half to get the measure of angle π ΄π Έπ · is one-half the measure of arc π ΄π Ά minus the measure of arc π ΅π ·. We can write this more formally as
follows. If π ΄π ΅ and π Άπ · are secants which intersect at a point π Έ outside of our circle, then the measure of the angle between the two secants is one-half the positive difference between
the measures of both arcs intercepted by the sides of the angle. In other words, the measure of angle π ΄π Έπ · is one-half the positive difference of the measure of arc π ΄π Ά and the measure
of arc π ΅π ·. Letβ s see an example of how we can use this property to determine the angle between two secants which intersect outside of a circle.
Find π ₯.
In this question, weβ re asked to find the value of π ₯. And we can see in our diagram that π ₯ is the angle between two secant lines which intersect outside of our circle. And we can find the
measure of π ₯ by recalling the following fact. The angle between two secant lines in a circle which intersect outside of a circle is one-half the positive difference of the measures of the arcs
intercepted by the sides of the angle.
To apply this property, letβ s do this step by step. First, letβ s mark the sides of the angle of π ₯. We can see that π ₯ is the angle between the lines π ΄π Ά and π ΄π Έ. So the two sides
of our angle are the line segment π ΄π Ά and the line segment π ΄π Έ. Next, we need to find the measures of the arcs intercepted by the two sides of our angle. The first side of our angle
intersects the circle at the point π ΅, and the second side of our angle intersects the circle at the point π ·. So one of the arcs weβ re going to use is the arc from π ΅ to π ·. Similarly, the
first side of our angle intercepts the circle at the point π Ά, and the second side of our angle intercepts the circle at the point π Έ. So the other arc weβ re interested in is the arc from π Ά
to π Έ.
Finally, the measure of our angle will be one-half the positive difference between the measures of these two arcs. And since the arc from π Ά to π Έ is bigger than the arc from π ΅ to π ·, this
gives us the following result. π ₯ will be equal to one-half multiplied by the measure of arc π Άπ Έ minus the measure of arc π ΅π ·. And weβ re given both of these values in the diagram. The
measure of arc π Άπ Έ is 132 degrees, and the measure of arc π ΅π · is 36 degrees. So we substitute these values into our formula. We get that π ₯ is equal to one-half multiplied by 132 degrees
minus 36 degrees. And we can then evaluate this expression. 132 minus 36 is equal to 96. And if we multiply this by one-half, we get 48. Therefore, π ₯ is equal to 48 degrees.
Therefore, we were able to find the value of π ₯ in the given diagram. It was one-half the difference between the measure of arc π Άπ Έ and the measure of arc π ΅π ·, which was 48 degrees.
Letβ s now consider how we would find the angle between two tangent lines to a circle which intersect at a point outside of the circle. For example, letβ s consider the following tangent lines
which intersect at the point π Ά. We want to determine the measure of angle π ΄π Άπ ΅. If we call the center of our circle π , then π π ΄π Άπ ΅ is a quadrilateral. And the internal angles
of a quadrilateral add to 360 degrees.
So the measure of angle π plus the measure of angle π ΄ plus the measure of angle π Ά plus the measure of angle π ΅ is equal to 360 degrees. Since π ΄ and π ΅ are the points of tangency and π
is the center of our circle, the angle at π ΄ and the angle at π ΅ will be right angles. So these are both equal to 90 degrees. So we can subtract 180 degrees from both sides of the equation to
get the measure of angle π plus the measure of angle π Ά is equal to 180 degrees. And we can see in our diagram that angle π is the central angle of a circle and is subtended by the arc from π
΄ to π Ά. And we know the measure of an arc is equal to the measure of its central angle. So in this case, the measure of angle π is equal to the measure of the arc from π ΄ to π ΅.
We can then substitute this into our equation and then rearrange to find an expression for the measure of angle π Ά. We get that the measure of angle π Ά is equal to 180 degrees minus the measure
of the arc from π ΄ to π ΅. And we can also write the result weβ ve just proven formally as follows. If two tangents to a circle at points π ΄ and π ΅ intersect at a point π Ά, then the measure
of the angle between the tangents is 180 degrees minus the measure of the arc between the two points of tangency. The measure of angle π Ά is equal to 180 degrees minus the measure of the arc from π
΄ to π ΅.
Letβ s now see an example where we use this property to determine the angle between two tangent lines of a circle which intersect at a point outside of the circle.
Find π ₯.
In this question, weβ re asked to find the value of π ₯. And we can see that π ₯ is the angle between two tangent lines to our circle. Thatβ s the line from π ΄ to π Ά and the line from π ΄ to
π ΅. They just touch the circle at a single point, so these are tangent lines. And we can find the value of π ₯ by recalling the following property for the angle between two tangent lines which
intersect at a point outside of our circle.
We recall the angle between two tangent lines which intersect at a point is 180 degrees minus the measure of the arc between the two points of tangency. In our diagram, the points of tangency are the
points π ΅ and π Ά. And the arc between π ΅ and π Ά will be the minor arc shown. And we know the measure of this arc; its measure is 151 degrees. Then, our property tells us that the value of π
₯ is equal to 180 degrees minus the measure of arc π ΅π Ά. So we can substitute the measure of arc π ΅π Ά, being 151 degrees, to get π ₯ is equal to 180 degrees minus 151 degrees, which we can
calculate is 29 degrees.
Therefore, by using the fact that the angle between two tangent lines which intersect at a point outside of a circle is 180 degrees minus the measure of the arc between the two points of tangency, we
were able to show that π ₯ is equal to 29 degrees.
Finally, letβ s try and find the angle between a tangent line and a secant line which intersect outside of a circle. In this diagram, the tangent line is π ΄π · and the secant line is π Άπ ΅.
And we want to find the measure of the angle π ΄π ·π ΅. Weβ ll do this by using a very similar method to the last three proofs. Weβ ll start by connecting π ΄ and π ΅ to construct a triangle π
΄π ΅π ·. We see that angle π Άπ ΅π ΄ and angle π ΄π ΅π · are on a straight line, so their measures add to 180 degrees. So we have the measure of angle π Άπ ΅π ΄ plus the measure of angle
π ΄π ΅π · is equal to 180 degrees.
We then also have that the sum of the measures of the internal angles in a triangle add to 180 degrees. So we have the measure of angle π ΅π ·π ΄ plus the measure of angle π ΅π ΄π · plus the
measure of angle π ΄π ΅π · is equal to 180 degrees. And now we have two different expressions which when added to the measure of angle π ΄π ΅π · is equal to 180 degrees. So these two
expressions must be equal. The measure of angle π Άπ ΅π ΄ is equal to the measure of angle π ΅π ·π ΄ added to the measure of angle π ΅π ΄π ·.
We can subtract the measure of angle π ΅π ΄π · from both sides to find an expression for the measure of angle π ΅π ·π ΄. We have the measure of angle π ΅π ·π ΄ is equal to the measure of
angle π ΅π ΄π · minus the measure of angle π Άπ ΅π ΄. We can find an expression for the measure of angle π ΅π ΄π · by first adding the following two radii to our diagram. And then weβ ll
use the fact that the measure of the internal angles of quadrilateral π π ΄π ·π ΅ add to 360 degrees. Since π ΄ is a point of tangency for our tangent line, angle π π ΄π · is a right angle.
So the sum of the internal angles of this quadrilateral β the measure of angle π ΄π π ΅ plus 90 degrees plus the measure of angle π ΅π ·π ΄ plus the measure of angle π ·π ΅π β is
equal to 360 degrees.
We know the measure of the central angle π ΄π π ΅ will be equal to the measure of the arc π ΄π ΅. So we can substitute this into our expression to give us the following. And by considering the
internal angles of triangle π ΄π ΅π ·, the internal angles sum to 180 degrees. So the measure of angle π ΄π ΅π · is 180 degrees minus the sum of the other two angles, the measure of angle π ΅π
΄π · and the measure of angle π ΅π ·π ΄. Finally, since π π ΄ and π π ΅ are radii, this means π π ΄π ΅ is an isosceles triangle. Therefore, the measure of angle π π ΄π ΅ and the
measure of angle π π ΅π ΄ are equal. In particular, since angle π π ΄π · is a right angle, we have the measure of angle π π ΄π ΅ is equal to 90 minus the measure of angle π ΅π ΄π ·.
Now, all we need to use is the fact that the measure of angle π ·π ΅π is the sum of the measure of angle π ΄π ΅π · and the measure of angle π π ΄π ΅. We would substitute these into our
expression and then simplify. And we would be able to find the following result. The measure of angle π ΅π ΄π · is one-half the measure of the arc from π ΄ to π ΅. To do this, letβ s clear some
space and go back to the following equation.
We can find an expression for the measure of angle π Άπ ΅π ΄ from our diagram. Angle π Άπ ΅π ΄ is subtended by the major arc from π ΄ to π Ά. And the measure of an inscribed angle is one-half
the measure of the arc itβ s subtended by. So this is one-half the measure of the arc from π ΄ to π Ά. We can substitute our expression for the measure of angle π ΅π ΄π ·, giving us the
following equation, which we can rearrange for the measure of angle π ΅π ·π ΄, which gives us the following. The measure of angle π ΅π ·π ΄ is one-half the measure of the major arc from π ΄ to
π Ά minus the measure of the arc from π ΄ to π ΅.
An easy way to remember this is the measure of the angle is one-half the difference of the measures of the two arcs intercepted by the sides of the angle. And of course we take the positive value for
this difference.
Before we finish, thereβ s one more property we can show. Weβ ve already shown the measure of the angle between two tangents of a circle which meet at a point is 180 degrees minus the measure of
the minor arc between the two points of tangency. We can relate this to our other results by considering the measure of the other arc; letβ s call this π ¦. These two arcs make up a full circle, so
the sum of their measures is 360 degrees. Subtracting π ₯ from both sides of the equation gives us π ¦ is 360 degrees minus π ₯. And we want to use this to consider one-half the difference between
these two arcs. Thatβ s one-half π ¦ minus π ₯.
Weβ ll substitute this expression for π ¦ into one-half the difference. This gives us one-half 360 minus π ₯ minus π ₯, which if we simplify is 180 degrees minus π ₯, which by using our first
result is the measure of angle π ΄π Άπ ΅. In other words, we can also think of the measure of the angle between two tangents which meet outside of a circle as one-half the difference between the
two arcs between the points of tangency.
Letβ s go over the key points of this video. First, we saw if two chords intersect at a point in the circle, then the measure of the angle between the two chords is half the sum of the measures of
the two arcs opposite the angle. Next, we saw if two secants, two tangents, or a secant and a tangent intersect at a point outside of a circle, then the measure of the angle between them is half the
positive difference between the measures of both arcs intercepted by the sides of the angle. Finally, we saw the measure of the angle between two tangents which intersect outside of a circle is 180
degrees minus the measure of the minor arc between the two points of tangency. | {"url":"https://www.nagwa.com/en/videos/757135434315/","timestamp":"2024-11-06T08:37:26Z","content_type":"text/html","content_length":"292972","record_id":"<urn:uuid:7df51972-f702-4ae7-b559-6a71283aaced>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00517.warc.gz"} |
Two Halves
Solving problems
Children often enjoy finding things that are the same.
Adults could ask children to find half of lots of different things, quantities and collections.
The Activity
With a playdough cookie, pose a story problem about having to share it with a friend. How could you do this?
Cut or break it into two pieces and keep the bigger 'half' yourself. Ask the children what they think about this.
Present a range of materials such as paper shapes, string and bananas. Challenge children to halve them and then discuss and display the results.
Encouraging mathematical thinking and reasoning:
What do your halves look like?
How did you make the halves?
How do you know they are halves?
How can you check they are the same size?
What can you do if you get the wrong number? What if you don't have enough or have too much?
Opening Out
What if you have to halve a box of four cakes? A collection of pennies?
What if you have to halve a length of gold ribbon? A bottle of drink?
Is there another way to fold a square of paper in half?
Can you put something on paper to show what your halves look like?
Can you put something on paper to show how you know that these are halves?
How do you write half?
The Mathematical Journey
Counting and cardinality:
• using counting to check that both amounts are the same
Matching numerals and amounts:
• selecting numerals to match the numbers involved
• predicting the result of taking away one number from another
• using the inverse addition facts, e.g. "Half of ten is five, because five and five make ten"
• understanding that dividing by two results in two equal parts - explaining this as 'same' or 'fair', or justifying by matching amounts
• awareness of aspects such as length, volume, weight, area
• comparing by estimating or directly, or using measuring tools such as identical containers or balance scales
• explaining how they know that the halves are the same amount
Development and Variation
Encourage children to find halves of:
• 2D shapes (area) by drawing lines or folding paper.
• 3D shapes (volume) by cutting e.g. fruit and playdough.
• Lengths e.g. ribbons, strips of paper - folding and cutting.
• Weights e.g. playdough - checking with balance scales.
• Volumes e.g. water - pouring into two identical containers.
• Numbers of items e.g. pennies, jewels, pegs on pegboards.
• Numbers of structured materials e.g. Unifix sticks, Numicon, Cuisenaire.
Make a display of halves.
Where does half go on the number line? Make an 'ages' line from birth to 20, for children and siblings, to include half years e.g. one and a half, four and a half...
Use tablets to halve pictures, and use mirrors to halve (and double) pictures.
• 2D shapes, folding paper
• Ribbons
• Playdough, knives, scales
• Water, jugs, identical containers, mirrors
• Items e.g. pennies, jewels, pegs on pegboards
• Structured materials e.g. Unifix sticks, Numicon, Cuisenaire
Download a PDF of this resource.
Acknowledgements: Claire Christie, Annabel Bennet | {"url":"https://nrich.maths.org/eyfs-activities/two-halves","timestamp":"2024-11-09T04:32:00Z","content_type":"text/html","content_length":"43590","record_id":"<urn:uuid:79999560-3554-4db7-850a-edfc8a096379>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00598.warc.gz"} |
GSoC 2016
Google Summer of Code is a highly enjoyable and rewarding way to spend a summer.
→ SageMath's GSoC page to submit your propsoal ←
SageMath (or Sage for short) is a GPL open-source mathematical software system. It is designed to be not just a computer algebra system, but more like a complete environment for doing mathematics and
related calculations. It is based on a vast collection of existing open-source software tools and libraries and ties them together via Python. Python is also the primary interface language for the
user and its object-oriented way of expressing concepts is used to express calculations - of course, there are also many “normal” functions
Sage works hand-in-hand with other computational mathematics software systems, such as SymPy, GAP, etc, and can serve as an umbrella organization for GSOC projects for those sister projects.
All projects will start with an introduction phase to learn about Sage’s (or sister projects') internal organization and to get used to their established development process.
For Sage, this is documented in the developers' manual and all students will be instructed by the mentors on how to get their hands dirty. Sage uses Git (accessible at http://git.sagemath.org/) for
revision control and trac (accessible at http://trac.sagemath.org) for organizing development and code review. Our license is GPLv2+. Feel free to contact Mentors before you send us project
require you to show us that you are able to execute actual development by submitting a patch via Sage's trac (i.e. see tickets marked for beginners) or a similar development tool of the respective
For Sage, feel free to introduce yourself and your project idea in Sage's GSOC mailing list.
For GAP, feel free to introduce yourself to GAP's developer list. Some discussion of possible GAP GSOC projects is happening at the joint GAP Sage days in St Andrews, see the agenda.
To get a better feeling of how Sage works, please check out the developer guide.
There is also a comprehensive list of future feature wishes in our trac issue tracker. They might contain the perfect project idea for you we didn't even think about!
Application Template
Please use this application template, in particular answer the questions thoroughly enough to convince us to pick you!
• Name
• Contact Information (email, instant messaging, …)
• Location/Timezone
• University
• What are your technical skills, education, experience, etc. Especially make sure to explain with what level of mathematics you are comfortable with and on what level you would like to program.
• Who are you? What makes you the best person to work on this particular project? Your personal motivation?
• What platform and operating-system are you using on your computer? (Sage development is done best on Linux and OSX)
• Are you or have you been engaged in other open-source projects?
• Do you code on your own pet projects?
• Are you a Sage user, how long do you know Sage?
• Title, Project Synopsis: a short description and summary of its aim and scope.
• What is your personal involvement or relationship with your proposed project?
• Details: describe all the details and explain modules or parts of your whole project. Break down the whole project into individual tasks - as good as possible - and describe deliverable and
quantifiable results for each of them. It also helps if you have already discussed this with a possible mentor.
• Schedule: A timetable, including special circumstances like exams or holidays, for the individual tasks.
• Risk Management: Try to anticipate potential problems and explain, how to mitigate them. Propose alternative scenarios, if a particular milestone isn't reached, to still successfully complete the
Project Ideas
Hyperplane arrangements
Mentor Miguel Marco / Volker Braun
Difficulty Medium
Skills standard Python knowledge, good mathematical knowledge about hyperplane arrangements
Sage already has a module for hyperplane arrangements, but it only admits arrangements over the rationals and finite fields. Since there is a rich theory about complex arrangements, it would be
interesting to extend this to other fields. It would probably require to redesign the classes, having one common base class and further classes for field-specific methods. In this setting, it would
also make sense to implement invariants as the Orlik-Solomon/Orlik-Terao algebras, resonance varieties, fundamental group of complements, logarithmic derivation modules (with the corresponding Betti
Wrap/Expose more functionalities from Singular
Mentor Miguel Marco / Travis Scrimshaw
Difficulty Medium
Skills Ability to work with the category/parent/element framework, some basic understanding of commutative algebra objects
We ship Singular, which is used mainly for computations with multivariate polynomial rings and their ideals (mostly Gröbner basis). We could also take advantage of its capabilities to deal with
modules, resolutions... In order to do so, we would need to write some wrapping classes for these objects and interface the corresponding Singular calls.
Implement a framework for non-free modules
Mentor Travis Scrimshaw
Difficulty Medium
Skills Ability to work with the category/parent/element framework, some linear algebra or understanding of modules, some knowledge of rings and ideals is preferable
There is very little capacity in Sage for non-free modules. We should implement generic functionality and base classes for non-free modules. This will likely have some overlap with the project to
expose more from Singular.
Combine common functionality between CombinatorialFreeModule and Sage's free module code
Mentor Travis Scrimshaw
Difficulty Medium--Hard
Skills Good understanding of OOP and adapter classes and basic linear algebra
Currently, there is some overlap between the implementation of CombinatorialFreeModule (CFM) and (sparse) FreeModule. In particular, a CFM is roughly a special indexing set on top of a sparse free
module. The goal of this project would be to combine features between these two class hierarchies in an attempt to ease the burden of code maintenance and improve the features of both.
Extending Matroid Theory functionality
Mentor Stefan van Zwam, Michael Welsh
Difficulty Medium
Skills Good knowledge of Python, SageMath, and some knowledge of Matroid Theory would be advantageous
The basic code for dealing with matroids in SageMath is fairly mature, but many enhancements are still desirable. Among those:
1. An improved catalog: This could be a warm-up task for the first two weeks. Enhance the matroid catalog with some options, such as adding options to prescribe the field of a representation.
2. Automorphisms: Very basic to implement (construct a graph out of the matroid, use the automorphism group code from graphs) but still missing.
3. Certificates: Many matroid test methods are currently True/False. In many cases, it makes sense to return a certificate of the claim, such as an isomorphism in case two matroids are determined to
be isomorphic.
4. Representability tests: Test if a given abstract matroid is quaternary, regular, ...
5. Framework for classes of representable matroids: This can take two directions. First, a parent-like class such as BinaryMatroids, which symbolically represents all binary matroids, and has
methods for extending, membership tests, etc. Second, a finite collection of matroids (such as all binary matroids without a P7-minor up to 9 elements), where each matroid stores information
about its allowed extensions and coextensions for faster generation and membership testing.
6. Faster minor testing: Testing whether a matroid has a specified minor is an important yet computationally expensive task. It is desirable to increase SageMath's ability for this. For binary
matroids, Jayant Apte's ticket is a good start, but needs to be brought up to standards of documentation. For other classes, ideas from Hlineny's Macek software can be used (pattern matching of
2x2 subdeterminants in the representable case).
7. Better plotting: Rank-3 matroids can be plotted, but the algorithm that positions the points can be improved. In particular, right now points are placed so that false collinearities appear.
8. Testing graphicness: Implement Cunningham's algorithm for this. SageMath's current implementation is fairly slow for larger matroids.
9. Trac tickets: Any of the issues on the SageMath Trac server. http://trac.sagemath.org/query?status=!closed&component=matroid+theory
Rank-metric codes
Mentor Johan S. R. Nielsen, David Lucas
Difficulty Medium
Skills Knowledge of abstract algebra (finite fields, field extensions, polynomial rings). Standard knowledge of Python. Familiarity with coding theory a plus.
Coding theory studies the encoding of data in ways that have certain auxiliary properties, such as error-correction capabilities. Rank-metric codes are a hot research topic, with applications in
packet-based network communication. Essentially, a rank metric code is a set of matrices, usually over a finite field, such that the difference of any two of them has high rank. By far the most
important construction is the Gabidulin code, which arise from the evaluation of skew polynomials.
This project is to implement Rank-metric codes in Sage, including Gabidulin codes and their decoding. Sage has good (and quickly expanding) support for linear codes using the Hamming metric, the most
common object in error-correcting code theory. The framework there should serve as inspiration. Implementing Gabidulin codes will also require support for skew polynomial rings, for which some work
has already been done in Sage and this should be followed through.
Generic Dispatcher
Mentor Vincent Delecroix
Difficulty Medium to very Hard
Skills good knowledge of Python and notions of Cython and C
In Sage there are many places where we can choose between several algorithms or underlying softwares to solve a problem. This is often related to the presence of the keyword algorithm or method in
functions. The aim of this task is to build a generic dispatcher that would choose depending on the parameters the fastest solution available. The solution must be very light and not affect
performance. The dispatch threshold must be static and computed through a dedicated command (like sage -recompute-thresholds). We could also have default threshold that depend on architectures. This
generic dispatcher could also be used to check coherency between the various implementations.
Note that it is different from what is called multimethods where the dispatch depends only on the input type. Here we consider a dispatcher that might also depend on the input values.
1. identify some Sage functions/methods that could benefit from the dispatcher
2. write a simple prototype of generic dispatcher adapted to 1 and intensively test it
3. release a first candidate for the dispatcher
4. more Sage functions coverage (this can be hard since some threshold might be extremly complicated to determine. In order to do that you are advised to ask for help on the mailing-list)
Regression test framework
Mentor Vincent Delecroix
Difficulty Medium
Skills good knowledge of Python and some elementary statistics
Sage currently does not provide tests for speed regression. In the past, we had some critical regressions and it would be natural to implement a general framework to deal with this. The project
should be implemented in Python and not be Sage specific. It might serve other purposes in the future.
1. Implement a Python program that runs some python code and stores the corresponding timing.
2. Design a large set of Sage tests (possibly asking for contribution on sage-devel)
3. Set up a server whose object would be to check for speed regression (or progression) of Sage code samples
Moduli space of dynamical systems
Mentor Ben Hutz
Difficulty Medium
Skills basic number theory, abstract algebra (groups, rings, and fields), basic algebraic geometry, python
There is functionality for working with dynamical systems over projective space in Sage. However, one of the areas lacking in functionality is the moduli space. We say two self-maps of projective
space are equivalent if there is an element of PGL that conjugates one to the other. The following two algorithms should be implemented.
1. Given two endomorphisms of projective space determine if they are conjugate. In other words, determine if they are in the same class in the moduli space. If they are, also return the PGL element
that conjugates one to the other. [Xander Faber, Michelle Manes, and Bianca Viray. Computing conjugating sets and automorphism groups of rational functions. Journal of Algebra 423 (2014),
2. Not all representations of a given moduli space class are equally 'nice'. There is already an algorithm implemented to return the minimal model (in terms of resultant), but the coefficients can
be non-optimal. Given an endomorphism of projective space compute a reduced form, i.e., a conjugation that makes the coefficients small. The simplest approach would be to "reduce" the binary form
describing the fixed points or (if that's too degenerate) the points of period n for some small n. [Stoll, Michael; Cremona, John E., On the reduction theory of binary forms. J. Reine Angew.
Math. 565 (2003), 79–99.]
If there is still time leftover, you can examine some applications of these two algorithms such as the concept of potential good reduction or an iterator of reduced minimal moduli elements of bounded
Modular abelian varieties
Mentors William Stein ([email protected]) and Hao Chen ([email protected])
Difficulty Advanced -- you must at least be a graduate student in math
Skills advanced algebraic number theory, modular forms, linear algebra
Implement as much as possible in Sage for modular abelian varieties that is in Magma, but which I never got around to rewriting for Sage. There are many key functions for working with modular abelian
varieties in Sage (starting with J0 and J1), which just raise NotImplementedError. They are available in Magma, but not Sage. Implement them and include them in Sage. See the last few days of my
class here. This is a pretty straightforward project, and you can't lose since it's just a matter of incrementally implementing things. However, it's absolutely critical that you already know how to
program and understand at least the foundations of the relevant advanced mathematics.
Please feel free to add ideas (or copy-paste them from last year's Sage GSOC wiki page). | {"url":"https://wiki.sagemath.org/GSoC/2016","timestamp":"2024-11-07T23:33:48Z","content_type":"text/html","content_length":"41077","record_id":"<urn:uuid:5a32aa64-7077-4457-af40-987fb27523da>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00031.warc.gz"} |
Direct products in communication complexity
We give exponentially small upper bounds on the success probability for computing the direct product of any function over any distribution using a communication protocol. Let suc(μ, f,C) denote the
maximum success probability of a 2-party communication protocol for computing the boolean function f(x, y) with C bits of communication, when the inputs (x, y) are drawn from the distribution μ. Let
μn be the product distribution on n inputs and fn denote the function that computes n copies of f on these inputs. We prove that if T log^3/2T ⋘ (C - 1) √ n and suc(μ, f,C) < 2 3 , then suc(μn, f^n,
T) ≤ exp(-Ω(n)). When μ is a product distribution, we prove a nearly optimal result: As long as T log ^2 T ⋘ Cn, we must have suc(μn, fn, T) ≤ exp(-Ω(n)).
Original language English
Title of host publication Proceedings - 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, FOCS 2013
Pages 746-755
Number of pages 10
State Published - 2013
Externally published Yes
Event 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, FOCS 2013 - Berkeley, CA, United States
Duration: 27 Oct 2013 → 29 Oct 2013
Publication series
Name Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
ISSN (Print) 0272-5428
Conference 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, FOCS 2013
Country/Territory United States
City Berkeley, CA
Period 27/10/13 → 29/10/13
Dive into the research topics of 'Direct products in communication complexity'. Together they form a unique fingerprint. | {"url":"https://cris.huji.ac.il/en/publications/direct-products-in-communication-complexity","timestamp":"2024-11-08T19:02:09Z","content_type":"text/html","content_length":"48148","record_id":"<urn:uuid:fda21bfa-fd42-425f-9339-d637f041b3f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00849.warc.gz"} |
Decomposition - math word problem (1223)
Make decomposition using prime numbers of number 206. The result write as prime factors (all, even multiple)
Correct answer:
Did you find an error or inaccuracy? Feel free to
write us
. Thank you!
You need to know the following knowledge to solve this word math problem:
Related math problems and questions: | {"url":"https://www.hackmath.net/en/math-problem/1223","timestamp":"2024-11-06T10:25:48Z","content_type":"text/html","content_length":"45545","record_id":"<urn:uuid:46b16e06-d63a-4dc7-a506-4597a832f018>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00469.warc.gz"} |
The surface area of a ball is measured to be A = 35 cm2.
a. (a)...
The surface area of a ball is measured to be A = 35 cm2. a. (a)...
The surface area of a ball is measured to be A = 35 cm2.
a. (a) Write an equation for the radius of the ball, r, treating it as a sphere, in terms of its surface area.
(b) The mass is measured to be M = 170 g. Calculate its density ρ in g/cm3. Part
(c) What is the density ρkg/m3 in kg/m3? | {"url":"https://justaaa.com/physics/1277016-the-surface-area-of-a-ball-is-measured-to-be-a-35","timestamp":"2024-11-04T08:19:44Z","content_type":"text/html","content_length":"39278","record_id":"<urn:uuid:9efdd636-9c88-4ad9-90d0-ab5cce334219>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00786.warc.gz"} |
My Manifesto
To keep a sharp eye on what is “right” in software development, here are a few rules I agree with.
When working on code, these are priorities:
1. Failing CI
2. User reported bugs.
3. Self reported bugs.
4. New features.
Solve bugs and dependencies in the lowest level possible. From low to high:
1. Package management.
2. Configuration management.
3. Application.
A package should be able to autonomously:
• Install.
• Update.
• Remove.
• Re-install.
A configuration management contains:
• References to packages.
• Configuration files.
• Commands to activate configuration.
• A method to persistently enable software.
• Code should be understandable by anybody. If it’s to difficult to draw on a single piece of paper, simplify it.
• Code (RPM, playbook, etc) serves the smallest functionality possible.
• Push complexity to locations that users will not be bothered, keep the “interface” as simple as possible.
• Start with code that barely works. That means some assumptions will be made.
Use dependencies when absolutely required, in other words: only use dependencies when two entities have no value without each other. This ensures:
• Code can be reused maximally.
• Code can be forked.
• Assumptions are left over to the integrator.
Keep the smallest (testable) related code in a repository. This ensures autonomous development, most independent testing and easy collaboration.
There are multiple types of code:
• The code for the application - Typically Python, C, PHP, etc.
• Code packaging - Typically RPM, NPM, or PIP.
• Code for configuration - Typically Ansible or Puppet.
• The pipeline - Typically Travis-CI or GitLab-CI.
• Test code - Typically Ansible playbooks, bats, goss or bash.
Testing (integration) happens on an environment that’s production-like.
Also; see my purpose | {"url":"https://robertdebock.nl/my-manifesto.html","timestamp":"2024-11-02T09:03:50Z","content_type":"text/html","content_length":"7219","record_id":"<urn:uuid:752e9dda-342b-41cc-9b64-537a38cd6ff8>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00707.warc.gz"} |
RFC 3447: Public-Key Cryptography Standards (PKCS) #1: RSA Cryptography Specifications Version 2.1
Network Working Group J. Jonsson
Request for Comments: 3447 B. Kaliski
Obsoletes: 2437 RSA Laboratories
Category: Informational February 2003
Public-Key Cryptography Standards (PKCS) #1: RSA Cryptography
Specifications Version 2.1
Status of this Memo
This memo provides information for the Internet community. It does
not specify an Internet standard of any kind. Distribution of this
memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2003). All Rights Reserved.
This memo represents a republication of PKCS #1 v2.1 from RSA
Laboratories' Public-Key Cryptography Standards (PKCS) series, and
change control is retained within the PKCS process. The body of this
document is taken directly from the PKCS #1 v2.1 document, with
certain corrections made during the publication process.
Table of Contents
1. Introduction...............................................2
2. Notation...................................................3
3. Key types..................................................6
3.1 RSA public key..........................................6
3.2 RSA private key.........................................7
4. Data conversion primitives.................................8
4.1 I2OSP...................................................9
4.2 OS2IP...................................................9
5. Cryptographic primitives..................................10
5.1 Encryption and decryption primitives...................10
5.2 Signature and verification primitives..................12
6. Overview of schemes.......................................14
7. Encryption schemes........................................15
7.1 RSAES-OAEP.............................................16
7.2 RSAES-PKCS1-v1_5.......................................23
8. Signature schemes with appendix...........................27
8.1 RSASSA-PSS.............................................29
8.2 RSASSA-PKCS1-v1_5......................................32
9. Encoding methods for signatures with appendix.............35
Jonsson & Kaliski Informational [Page 1]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
9.1 EMSA-PSS...............................................36
9.2 EMSA-PKCS1-v1_5........................................41
Appendix A. ASN.1 syntax...........................................44
A.1 RSA key representation.................................44
A.2 Scheme identification..................................46
Appendix B. Supporting techniques..................................52
B.1 Hash functions.........................................52
B.2 Mask generation functions..............................54
Appendix C. ASN.1 module...........................................56
Appendix D. Intellectual Property Considerations...................63
Appendix E. Revision history.......................................64
Appendix F. References.............................................65
Appendix G. About PKCS.............................................70
Appendix H. Corrections Made During RFC Publication Process........70
Security Considerations............................................70
Authors' Addresses.................................................71
Full Copyright Statement...........................................72
1. Introduction
This document provides recommendations for the implementation of
public-key cryptography based on the RSA algorithm [42], covering the
following aspects:
* Cryptographic primitives
* Encryption schemes
* Signature schemes with appendix
* ASN.1 syntax for representing keys and for identifying the schemes
The recommendations are intended for general application within
computer and communications systems, and as such include a fair
amount of flexibility. It is expected that application standards
based on these specifications may include additional constraints.
The recommendations are intended to be compatible with the standard
IEEE-1363-2000 [26] and draft standards currently being developed by
the ANSI X9F1 [1] and IEEE P1363 [27] working groups.
This document supersedes PKCS #1 version 2.0 [35][44] but includes
compatible techniques.
Jonsson & Kaliski Informational [Page 2]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
The organization of this document is as follows:
* Section 1 is an introduction.
* Section 2 defines some notation used in this document.
* Section 3 defines the RSA public and private key types.
* Sections 4 and 5 define several primitives, or basic mathematical
operations. Data conversion primitives are in Section 4, and
cryptographic primitives (encryption-decryption, signature-
verification) are in Section 5.
* Sections 6, 7, and 8 deal with the encryption and signature
schemes in this document. Section 6 gives an overview. Along
with the methods found in PKCS #1 v1.5, Section 7 defines an
OAEP-based [3] encryption scheme and Section 8 defines a PSS-based
[4][5] signature scheme with appendix.
* Section 9 defines the encoding methods for the signature schemes
in Section 8.
* Appendix A defines the ASN.1 syntax for the keys defined in
Section 3 and the schemes in Sections 7 and 8.
* Appendix B defines the hash functions and the mask generation
function used in this document, including ASN.1 syntax for the
* Appendix C gives an ASN.1 module.
* Appendices D, E, F and G cover intellectual property issues,
outline the revision history of PKCS #1, give references to other
publications and standards, and provide general information about
the Public-Key Cryptography Standards.
2. Notation
c ciphertext representative, an integer between 0 and
C ciphertext, an octet string
d RSA private exponent
Jonsson & Kaliski Informational [Page 3]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
d_i additional factor r_i's CRT exponent, a positive
integer such that
e * d_i == 1 (mod (r_i-1)), i = 3, ..., u
dP p's CRT exponent, a positive integer such that
e * dP == 1 (mod (p-1))
dQ q's CRT exponent, a positive integer such that
e * dQ == 1 (mod (q-1))
e RSA public exponent
EM encoded message, an octet string
emBits (intended) length in bits of an encoded message EM
emLen (intended) length in octets of an encoded message EM
GCD(. , .) greatest common divisor of two nonnegative integers
Hash hash function
hLen output length in octets of hash function Hash
k length in octets of the RSA modulus n
K RSA private key
L optional RSAES-OAEP label, an octet string
LCM(., ..., .) least common multiple of a list of nonnegative
m message representative, an integer between 0 and n-1
M message, an octet string
mask MGF output, an octet string
maskLen (intended) length of the octet string mask
MGF mask generation function
mgfSeed seed from which mask is generated, an octet string
Jonsson & Kaliski Informational [Page 4]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
mLen length in octets of a message M
n RSA modulus, n = r_1 * r_2 * ... * r_u , u >= 2
(n, e) RSA public key
p, q first two prime factors of the RSA modulus n
qInv CRT coefficient, a positive integer less than p such
q * qInv == 1 (mod p)
r_i prime factors of the RSA modulus n, including r_1 = p,
r_2 = q, and additional factors if any
s signature representative, an integer between 0 and n-1
S signature, an octet string
sLen length in octets of the EMSA-PSS salt
t_i additional prime factor r_i's CRT coefficient, a
positive integer less than r_i such that
r_1 * r_2 * ... * r_(i-1) * t_i == 1 (mod r_i) ,
i = 3, ... , u
u number of prime factors of the RSA modulus, u >= 2
x a nonnegative integer
X an octet string corresponding to x
xLen (intended) length of the octet string X
0x indicator of hexadecimal representation of an octet or
an octet string; "0x48" denotes the octet with
hexadecimal value 48; "(0x)48 09 0e" denotes the
string of three consecutive octets with hexadecimal
value 48, 09, and 0e, respectively
\lambda(n) LCM(r_1-1, r_2-1, ... , r_u-1)
\xor bit-wise exclusive-or of two octet strings
Jonsson & Kaliski Informational [Page 5]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
\ceil(.) ceiling function; \ceil(x) is the smallest integer
larger than or equal to the real number x
|| concatenation operator
== congruence symbol; a == b (mod n) means that the
integer n divides the integer a - b
Note. The CRT can be applied in a non-recursive as well as a
recursive way. In this document a recursive approach following
Garner's algorithm [22] is used. See also Note 1 in Section 3.2.
3. Key types
Two key types are employed in the primitives and schemes defined in
this document: RSA public key and RSA private key. Together, an RSA
public key and an RSA private key form an RSA key pair.
This specification supports so-called "multi-prime" RSA where the
modulus may have more than two prime factors. The benefit of multi-
prime RSA is lower computational cost for the decryption and
signature primitives, provided that the CRT (Chinese Remainder
Theorem) is used. Better performance can be achieved on single
processor platforms, but to a greater extent on multiprocessor
platforms, where the modular exponentiations involved can be done in
For a discussion on how multi-prime affects the security of the RSA
cryptosystem, the reader is referred to [49].
3.1 RSA public key
For the purposes of this document, an RSA public key consists of two
n the RSA modulus, a positive integer
e the RSA public exponent, a positive integer
In a valid RSA public key, the RSA modulus n is a product of u
distinct odd primes r_i, i = 1, 2, ..., u, where u >= 2, and the RSA
public exponent e is an integer between 3 and n - 1 satisfying GCD(e,
\lambda(n)) = 1, where \lambda(n) = LCM(r_1 - 1, ..., r_u - 1). By
convention, the first two primes r_1 and r_2 may also be denoted p
and q respectively.
A recommended syntax for interchanging RSA public keys between
implementations is given in Appendix A.1.1; an implementation's
internal representation may differ.
Jonsson & Kaliski Informational [Page 6]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
3.2 RSA private key
For the purposes of this document, an RSA private key may have either
of two representations.
1. The first representation consists of the pair (n, d), where the
components have the following meanings:
n the RSA modulus, a positive integer
d the RSA private exponent, a positive integer
2. The second representation consists of a quintuple (p, q, dP, dQ,
qInv) and a (possibly empty) sequence of triplets (r_i, d_i, t_i),
i = 3, ..., u, one for each prime not in the quintuple, where the
components have the following meanings:
p the first factor, a positive integer
q the second factor, a positive integer
dP the first factor's CRT exponent, a positive integer
dQ the second factor's CRT exponent, a positive integer
qInv the (first) CRT coefficient, a positive integer
r_i the i-th factor, a positive integer
d_i the i-th factor's CRT exponent, a positive integer
t_i the i-th factor's CRT coefficient, a positive integer
In a valid RSA private key with the first representation, the RSA
modulus n is the same as in the corresponding RSA public key and is
the product of u distinct odd primes r_i, i = 1, 2, ..., u, where u
>= 2. The RSA private exponent d is a positive integer less than n
e * d == 1 (mod \lambda(n)),
where e is the corresponding RSA public exponent and \lambda(n) is
defined as in Section 3.1.
In a valid RSA private key with the second representation, the two
factors p and q are the first two prime factors of the RSA modulus n
(i.e., r_1 and r_2), the CRT exponents dP and dQ are positive
integers less than p and q respectively satisfying
e * dP == 1 (mod (p-1))
e * dQ == 1 (mod (q-1)) ,
and the CRT coefficient qInv is a positive integer less than p
q * qInv == 1 (mod p).
Jonsson & Kaliski Informational [Page 7]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
If u > 2, the representation will include one or more triplets (r_i,
d_i, t_i), i = 3, ..., u. The factors r_i are the additional prime
factors of the RSA modulus n. Each CRT exponent d_i (i = 3, ..., u)
e * d_i == 1 (mod (r_i - 1)).
Each CRT coefficient t_i (i = 3, ..., u) is a positive integer less
than r_i satisfying
R_i * t_i == 1 (mod r_i) ,
where R_i = r_1 * r_2 * ... * r_(i-1).
A recommended syntax for interchanging RSA private keys between
implementations, which includes components from both representations,
is given in Appendix A.1.2; an implementation's internal
representation may differ.
1. The definition of the CRT coefficients here and the formulas that
use them in the primitives in Section 5 generally follow Garner's
algorithm [22] (see also Algorithm 14.71 in [37]). However, for
compatibility with the representations of RSA private keys in PKCS
#1 v2.0 and previous versions, the roles of p and q are reversed
compared to the rest of the primes. Thus, the first CRT
coefficient, qInv, is defined as the inverse of q mod p, rather
than as the inverse of R_1 mod r_2, i.e., of p mod q.
2. Quisquater and Couvreur [40] observed the benefit of applying the
Chinese Remainder Theorem to RSA operations.
4. Data conversion primitives
Two data conversion primitives are employed in the schemes defined in
this document:
* I2OSP - Integer-to-Octet-String primitive
* OS2IP - Octet-String-to-Integer primitive
For the purposes of this document, and consistent with ASN.1 syntax,
an octet string is an ordered sequence of octets (eight-bit bytes).
The sequence is indexed from first (conventionally, leftmost) to last
(rightmost). For purposes of conversion to and from integers, the
first octet is considered the most significant in the following
conversion primitives.
Jonsson & Kaliski Informational [Page 8]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
4.1 I2OSP
I2OSP converts a nonnegative integer to an octet string of a
specified length.
I2OSP (x, xLen)
x nonnegative integer to be converted
xLen intended length of the resulting octet string
X corresponding octet string of length xLen
Error: "integer too large"
1. If x >= 256^xLen, output "integer too large" and stop.
2. Write the integer x in its unique xLen-digit representation in
base 256:
x = x_(xLen-1) 256^(xLen-1) + x_(xLen-2) 256^(xLen-2) + ...
+ x_1 256 + x_0,
where 0 <= x_i < 256 (note that one or more leading digits will be
zero if x is less than 256^(xLen-1)).
3. Let the octet X_i have the integer value x_(xLen-i) for 1 <= i <=
xLen. Output the octet string
X = X_1 X_2 ... X_xLen.
4.2 OS2IP
OS2IP converts an octet string to a nonnegative integer.
OS2IP (X)
X octet string to be converted
x corresponding nonnegative integer
Jonsson & Kaliski Informational [Page 9]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
1. Let X_1 X_2 ... X_xLen be the octets of X from first to last,
and let x_(xLen-i) be the integer value of the octet X_i for
1 <= i <= xLen.
2. Let x = x_(xLen-1) 256^(xLen-1) + x_(xLen-2) 256^(xLen-2) + ...
+ x_1 256 + x_0.
3. Output x.
5. Cryptographic primitives
Cryptographic primitives are basic mathematical operations on which
cryptographic schemes can be built. They are intended for
implementation in hardware or as software modules, and are not
intended to provide security apart from a scheme.
Four types of primitive are specified in this document, organized in
pairs: encryption and decryption; and signature and verification.
The specifications of the primitives assume that certain conditions
are met by the inputs, in particular that RSA public and private keys
are valid.
5.1 Encryption and decryption primitives
An encryption primitive produces a ciphertext representative from a
message representative under the control of a public key, and a
decryption primitive recovers the message representative from the
ciphertext representative under the control of the corresponding
private key.
One pair of encryption and decryption primitives is employed in the
encryption schemes defined in this document and is specified here:
RSAEP/RSADP. RSAEP and RSADP involve the same mathematical
operation, with different keys as input.
The primitives defined here are the same as IFEP-RSA/IFDP-RSA in IEEE
Std 1363-2000 [26] (except that support for multi-prime RSA has been
added) and are compatible with PKCS #1 v1.5.
The main mathematical operation in each primitive is exponentiation.
Jonsson & Kaliski Informational [Page 10]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
5.1.1 RSAEP
RSAEP ((n, e), m)
(n, e) RSA public key
m message representative, an integer between 0 and n - 1
c ciphertext representative, an integer between 0 and n - 1
Error: "message representative out of range"
Assumption: RSA public key (n, e) is valid
1. If the message representative m is not between 0 and n - 1, output
"message representative out of range" and stop.
2. Let c = m^e mod n.
3. Output c.
5.1.2 RSADP
RSADP (K, c)
K RSA private key, where K has one of the following forms:
- a pair (n, d)
- a quintuple (p, q, dP, dQ, qInv) and a possibly empty
sequence of triplets (r_i, d_i, t_i), i = 3, ..., u
c ciphertext representative, an integer between 0 and n - 1
m message representative, an integer between 0 and n - 1
Error: "ciphertext representative out of range"
Assumption: RSA private key K is valid
Jonsson & Kaliski Informational [Page 11]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
1. If the ciphertext representative c is not between 0 and n - 1,
output "ciphertext representative out of range" and stop.
2. The message representative m is computed as follows.
a. If the first form (n, d) of K is used, let m = c^d mod n.
b. If the second form (p, q, dP, dQ, qInv) and (r_i, d_i, t_i)
of K is used, proceed as follows:
i. Let m_1 = c^dP mod p and m_2 = c^dQ mod q.
ii. If u > 2, let m_i = c^(d_i) mod r_i, i = 3, ..., u.
iii. Let h = (m_1 - m_2) * qInv mod p.
iv. Let m = m_2 + q * h.
v. If u > 2, let R = r_1 and for i = 3 to u do
1. Let R = R * r_(i-1).
2. Let h = (m_i - m) * t_i mod r_i.
3. Let m = m + R * h.
3. Output m.
Note. Step 2.b can be rewritten as a single loop, provided that one
reverses the order of p and q. For consistency with PKCS #1 v2.0,
however, the first two primes p and q are treated separately from
the additional primes.
5.2 Signature and verification primitives
A signature primitive produces a signature representative from a
message representative under the control of a private key, and a
verification primitive recovers the message representative from the
signature representative under the control of the corresponding
public key. One pair of signature and verification primitives is
employed in the signature schemes defined in this document and is
specified here: RSASP1/RSAVP1.
The primitives defined here are the same as IFSP-RSA1/IFVP-RSA1 in
IEEE 1363-2000 [26] (except that support for multi-prime RSA has
been added) and are compatible with PKCS #1 v1.5.
Jonsson & Kaliski Informational [Page 12]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
The main mathematical operation in each primitive is
exponentiation, as in the encryption and decryption primitives of
Section 5.1. RSASP1 and RSAVP1 are the same as RSADP and RSAEP
except for the names of their input and output arguments; they are
distinguished as they are intended for different purposes.
5.2.1 RSASP1
RSASP1 (K, m)
K RSA private key, where K has one of the following forms:
- a pair (n, d)
- a quintuple (p, q, dP, dQ, qInv) and a (possibly empty)
sequence of triplets (r_i, d_i, t_i), i = 3, ..., u
m message representative, an integer between 0 and n - 1
s signature representative, an integer between 0 and n - 1
Error: "message representative out of range"
Assumption: RSA private key K is valid
1. If the message representative m is not between 0 and n - 1,
output "message representative out of range" and stop.
2. The signature representative s is computed as follows.
a. If the first form (n, d) of K is used, let s = m^d mod n.
b. If the second form (p, q, dP, dQ, qInv) and (r_i, d_i, t_i)
of K is used, proceed as follows:
i. Let s_1 = m^dP mod p and s_2 = m^dQ mod q.
ii. If u > 2, let s_i = m^(d_i) mod r_i, i = 3, ..., u.
iii. Let h = (s_1 - s_2) * qInv mod p.
iv. Let s = s_2 + q * h.
v. If u > 2, let R = r_1 and for i = 3 to u do
1. Let R = R * r_(i-1).
Jonsson & Kaliski Informational [Page 13]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
2. Let h = (s_i - s) * t_i mod r_i.
3. Let s = s + R * h.
3. Output s.
Note. Step 2.b can be rewritten as a single loop, provided that one
reverses the order of p and q. For consistency with PKCS #1 v2.0,
however, the first two primes p and q are treated separately from the
additional primes.
5.2.2 RSAVP1
RSAVP1 ((n, e), s)
(n, e) RSA public key
s signature representative, an integer between 0 and n - 1
m message representative, an integer between 0 and n - 1
Error: "signature representative out of range"
Assumption: RSA public key (n, e) is valid
1. If the signature representative s is not between 0 and n - 1,
output "signature representative out of range" and stop.
2. Let m = s^e mod n.
3. Output m.
6. Overview of schemes
A scheme combines cryptographic primitives and other techniques to
achieve a particular security goal. Two types of scheme are
specified in this document: encryption schemes and signature schemes
with appendix.
The schemes specified in this document are limited in scope in that
their operations consist only of steps to process data with an RSA
public or private key, and do not include steps for obtaining or
validating the key. Thus, in addition to the scheme operations, an
application will typically include key management operations by which
Jonsson & Kaliski Informational [Page 14]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
parties may select RSA public and private keys for a scheme
operation. The specific additional operations and other details are
outside the scope of this document.
As was the case for the cryptographic primitives (Section 5), the
specifications of scheme operations assume that certain conditions
are met by the inputs, in particular that RSA public and private keys
are valid. The behavior of an implementation is thus unspecified
when a key is invalid. The impact of such unspecified behavior
depends on the application. Possible means of addressing key
validation include explicit key validation by the application; key
validation within the public-key infrastructure; and assignment of
liability for operations performed with an invalid key to the party
who generated the key.
A generally good cryptographic practice is to employ a given RSA key
pair in only one scheme. This avoids the risk that vulnerability in
one scheme may compromise the security of the other, and may be
essential to maintain provable security. While RSAES-PKCS1-v1_5
(Section 7.2) and RSASSA-PKCS1-v1_5 (Section 8.2) have traditionally
been employed together without any known bad interactions (indeed,
this is the model introduced by PKCS #1 v1.5), such a combined use of
an RSA key pair is not recommended for new applications.
To illustrate the risks related to the employment of an RSA key pair
in more than one scheme, suppose an RSA key pair is employed in both
RSAES-OAEP (Section 7.1) and RSAES-PKCS1-v1_5. Although RSAES-OAEP
by itself would resist attack, an opponent might be able to exploit a
weakness in the implementation of RSAES-PKCS1-v1_5 to recover
messages encrypted with either scheme. As another example, suppose
an RSA key pair is employed in both RSASSA-PSS (Section 8.1) and
RSASSA-PKCS1-v1_5. Then the security proof for RSASSA-PSS would no
longer be sufficient since the proof does not account for the
possibility that signatures might be generated with a second scheme.
Similar considerations may apply if an RSA key pair is employed in
one of the schemes defined here and in a variant defined elsewhere.
7. Encryption schemes
For the purposes of this document, an encryption scheme consists of
an encryption operation and a decryption operation, where the
encryption operation produces a ciphertext from a message with a
recipient's RSA public key, and the decryption operation recovers the
message from the ciphertext with the recipient's corresponding RSA
private key.
Jonsson & Kaliski Informational [Page 15]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
An encryption scheme can be employed in a variety of applications. A
typical application is a key establishment protocol, where the
message contains key material to be delivered confidentially from one
party to another. For instance, PKCS #7 [45] employs such a protocol
to deliver a content-encryption key from a sender to a recipient; the
encryption schemes defined here would be suitable key-encryption
algorithms in that context.
Two encryption schemes are specified in this document: RSAES-OAEP and
RSAES-PKCS1-v1_5. RSAES-OAEP is recommended for new applications;
RSAES-PKCS1-v1_5 is included only for compatibility with existing
applications, and is not recommended for new applications.
The encryption schemes given here follow a general model similar to
that employed in IEEE Std 1363-2000 [26], combining encryption and
decryption primitives with an encoding method for encryption. The
encryption operations apply a message encoding operation to a message
to produce an encoded message, which is then converted to an integer
message representative. An encryption primitive is applied to the
message representative to produce the ciphertext. Reversing this,
the decryption operations apply a decryption primitive to the
ciphertext to recover a message representative, which is then
converted to an octet string encoded message. A message decoding
operation is applied to the encoded message to recover the message
and verify the correctness of the decryption.
To avoid implementation weaknesses related to the way errors are
handled within the decoding operation (see [6] and [36]), the
encoding and decoding operations for RSAES-OAEP and RSAES-PKCS1-v1_5
are embedded in the specifications of the respective encryption
schemes rather than defined in separate specifications. Both
encryption schemes are compatible with the corresponding schemes in
PKCS #1 v2.0.
7.1 RSAES-OAEP
RSAES-OAEP combines the RSAEP and RSADP primitives (Sections 5.1.1
and 5.1.2) with the EME-OAEP encoding method (step 1.b in Section
7.1.1 and step 3 in Section 7.1.2). EME-OAEP is based on Bellare and
Rogaway's Optimal Asymmetric Encryption scheme [3]. (OAEP stands for
"Optimal Asymmetric Encryption Padding."). It is compatible with the
IFES scheme defined in IEEE Std 1363-2000 [26], where the encryption
and decryption primitives are IFEP-RSA and IFDP-RSA and the message
encoding method is EME-OAEP. RSAES-OAEP can operate on messages of
length up to k - 2hLen - 2 octets, where hLen is the length of the
output from the underlying hash function and k is the length in
octets of the recipient's RSA modulus.
Jonsson & Kaliski Informational [Page 16]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
Assuming that computing e-th roots modulo n is infeasible and the
mask generation function in RSAES-OAEP has appropriate properties,
RSAES-OAEP is semantically secure against adaptive chosen-ciphertext
attacks. This assurance is provable in the sense that the difficulty
of breaking RSAES-OAEP can be directly related to the difficulty of
inverting the RSA function, provided that the mask generation
function is viewed as a black box or random oracle; see [21] and the
note below for further discussion.
Both the encryption and the decryption operations of RSAES-OAEP take
the value of a label L as input. In this version of PKCS #1, L is
the empty string; other uses of the label are outside the scope of
this document. See Appendix A.2.1 for the relevant ASN.1 syntax.
RSAES-OAEP is parameterized by the choice of hash function and mask
generation function. This choice should be fixed for a given RSA
key. Suggested hash and mask generation functions are given in
Appendix B.
Note. Recent results have helpfully clarified the security
properties of the OAEP encoding method [3] (roughly the procedure
described in step 1.b in Section 7.1.1). The background is as
follows. In 1994, Bellare and Rogaway [3] introduced a security
concept that they denoted plaintext awareness (PA94). They proved
that if a deterministic public-key encryption primitive (e.g., RSAEP)
is hard to invert without the private key, then the corresponding
OAEP-based encryption scheme is plaintext-aware (in the random oracle
model), meaning roughly that an adversary cannot produce a valid
ciphertext without actually "knowing" the underlying plaintext.
Plaintext awareness of an encryption scheme is closely related to the
resistance of the scheme against chosen-ciphertext attacks. In such
attacks, an adversary is given the opportunity to send queries to an
oracle simulating the decryption primitive. Using the results of
these queries, the adversary attempts to decrypt a challenge
However, there are two flavors of chosen-ciphertext attacks, and PA94
implies security against only one of them. The difference relies on
what the adversary is allowed to do after she is given the challenge
ciphertext. The indifferent attack scenario (denoted CCA1) does not
admit any queries to the decryption oracle after the adversary is
given the challenge ciphertext, whereas the adaptive scenario
(denoted CCA2) does (except that the decryption oracle refuses to
decrypt the challenge ciphertext once it is published). In 1998,
Bellare and Rogaway, together with Desai and Pointcheval [2], came up
with a new, stronger notion of plaintext awareness (PA98) that does
imply security against CCA2.
Jonsson & Kaliski Informational [Page 17]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
To summarize, there have been two potential sources for
misconception: that PA94 and PA98 are equivalent concepts; or that
CCA1 and CCA2 are equivalent concepts. Either assumption leads to
the conclusion that the Bellare-Rogaway paper implies security of
OAEP against CCA2, which it does not.
(Footnote: It might be fair to mention that PKCS #1 v2.0 cites [3]
and claims that "a chosen ciphertext attack is ineffective against a
plaintext-aware encryption scheme such as RSAES-OAEP" without
specifying the kind of plaintext awareness or chosen ciphertext
attack considered.)
OAEP has never been proven secure against CCA2; in fact, Victor Shoup
[48] has demonstrated that such a proof does not exist in the general
case. Put briefly, Shoup showed that an adversary in the CCA2
scenario who knows how to partially invert the encryption primitive
but does not know how to invert it completely may well be able to
break the scheme. For example, one may imagine an attacker who is
able to break RSAES-OAEP if she knows how to recover all but the
first 20 bytes of a random integer encrypted with RSAEP. Such an
attacker does not need to be able to fully invert RSAEP, because she
does not use the first 20 octets in her attack.
Still, RSAES-OAEP is secure against CCA2, which was proved by
Fujisaki, Okamoto, Pointcheval, and Stern [21] shortly after the
announcement of Shoup's result. Using clever lattice reduction
techniques, they managed to show how to invert RSAEP completely given
a sufficiently large part of the pre-image. This observation,
combined with a proof that OAEP is secure against CCA2 if the
underlying encryption primitive is hard to partially invert, fills
the gap between what Bellare and Rogaway proved about RSAES-OAEP and
what some may have believed that they proved. Somewhat
paradoxically, we are hence saved by an ostensible weakness in RSAEP
(i.e., the whole inverse can be deduced from parts of it).
Unfortunately however, the security reduction is not efficient for
concrete parameters. While the proof successfully relates an
adversary Adv against the CCA2 security of RSAES-OAEP to an algorithm
Inv inverting RSA, the probability of success for Inv is only
approximately \epsilon^2 / 2^18, where \epsilon is the probability of
success for Adv.
(Footnote: In [21] the probability of success for the inverter was
\epsilon^2 / 4. The additional factor 1 / 2^16 is due to the eight
fixed zero bits at the beginning of the encoded message EM, which are
not present in the variant of OAEP considered in [21] (Inv must apply
Adv twice to invert RSA, and each application corresponds to a factor
1 / 2^8).)
Jonsson & Kaliski Informational [Page 18]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
In addition, the running time for Inv is approximately t^2, where t
is the running time of the adversary. The consequence is that we
cannot exclude the possibility that attacking RSAES-OAEP is
considerably easier than inverting RSA for concrete parameters.
Still, the existence of a security proof provides some assurance that
the RSAES-OAEP construction is sounder than ad hoc constructions such
as RSAES-PKCS1-v1_5.
Hybrid encryption schemes based on the RSA-KEM key encapsulation
paradigm offer tight proofs of security directly applicable to
concrete parameters; see [30] for discussion. Future versions of
PKCS #1 may specify schemes based on this paradigm.
7.1.1 Encryption operation
RSAES-OAEP-ENCRYPT ((n, e), M, L)
Hash hash function (hLen denotes the length in octets of the hash
function output)
MGF mask generation function
(n, e) recipient's RSA public key (k denotes the length in octets
of the RSA modulus n)
M message to be encrypted, an octet string of length mLen,
where mLen <= k - 2hLen - 2
L optional label to be associated with the message; the
default value for L, if L is not provided, is the empty
C ciphertext, an octet string of length k
Errors: "message too long"; "label too long"
Assumption: RSA public key (n, e) is valid
1. Length checking:
a. If the length of L is greater than the input limitation for the
hash function (2^61 - 1 octets for SHA-1), output "label too
long" and stop.
b. If mLen > k - 2hLen - 2, output "message too long" and stop.
Jonsson & Kaliski Informational [Page 19]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
2. EME-OAEP encoding (see Figure 1 below):
a. If the label L is not provided, let L be the empty string. Let
lHash = Hash(L), an octet string of length hLen (see the note
b. Generate an octet string PS consisting of k - mLen - 2hLen - 2
zero octets. The length of PS may be zero.
c. Concatenate lHash, PS, a single octet with hexadecimal value
0x01, and the message M to form a data block DB of length k -
hLen - 1 octets as
DB = lHash || PS || 0x01 || M.
d. Generate a random octet string seed of length hLen.
e. Let dbMask = MGF(seed, k - hLen - 1).
f. Let maskedDB = DB \xor dbMask.
g. Let seedMask = MGF(maskedDB, hLen).
h. Let maskedSeed = seed \xor seedMask.
i. Concatenate a single octet with hexadecimal value 0x00,
maskedSeed, and maskedDB to form an encoded message EM of
length k octets as
EM = 0x00 || maskedSeed || maskedDB.
3. RSA encryption:
a. Convert the encoded message EM to an integer message
representative m (see Section 4.2):
m = OS2IP (EM).
b. Apply the RSAEP encryption primitive (Section 5.1.1) to the RSA
public key (n, e) and the message representative m to produce
an integer ciphertext representative c:
c = RSAEP ((n, e), m).
c. Convert the ciphertext representative c to a ciphertext C of
length k octets (see Section 4.1):
C = I2OSP (c, k).
Jonsson & Kaliski Informational [Page 20]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
4. Output the ciphertext C.
Note. If L is the empty string, the corresponding hash value lHash
has the following hexadecimal representation for different choices of
SHA-1: (0x)da39a3ee 5e6b4b0d 3255bfef 95601890 afd80709
SHA-256: (0x)e3b0c442 98fc1c14 9afbf4c8 996fb924 27ae41e4 649b934c
a495991b 7852b855
SHA-384: (0x)38b060a7 51ac9638 4cd9327e b1b1e36a 21fdb711 14be0743
4c0cc7bf 63f6e1da 274edebf e76f65fb d51ad2f1 4898b95b
SHA-512: (0x)cf83e135 7eefb8bd f1542850 d66d8007 d620e405 0b5715dc
83f4a921 d36ce9ce 47d0d13c 5d85f2b0 ff8318d2 877eec2f
63b931bd 47417a81 a538327a f927da3e
DB = | lHash | PS | M |
+----------+ V
| seed |--> MGF ---> xor
+----------+ |
| |
+--+ V |
|00| xor <----- MGF <-----|
+--+ | |
| | |
V V V
EM = |00|maskedSeed| maskedDB |
Figure 1: EME-OAEP encoding operation. lHash is the hash of the
optional label L. Decoding operation follows reverse steps to
recover M and verify lHash and PS.
7.1.2 Decryption operation
RSAES-OAEP-DECRYPT (K, C, L)
Hash hash function (hLen denotes the length in octets of the hash
function output)
MGF mask generation function
Jonsson & Kaliski Informational [Page 21]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
K recipient's RSA private key (k denotes the length in octets
of the RSA modulus n)
C ciphertext to be decrypted, an octet string of length k,
where k = 2hLen + 2
L optional label whose association with the message is to be
verified; the default value for L, if L is not provided, is
the empty string
M message, an octet string of length mLen, where mLen <= k -
2hLen - 2
Error: "decryption error"
1. Length checking:
a. If the length of L is greater than the input limitation for the
hash function (2^61 - 1 octets for SHA-1), output "decryption
error" and stop.
b. If the length of the ciphertext C is not k octets, output
"decryption error" and stop.
c. If k < 2hLen + 2, output "decryption error" and stop.
2. RSA decryption:
a. Convert the ciphertext C to an integer ciphertext
representative c (see Section 4.2):
c = OS2IP (C).
b. Apply the RSADP decryption primitive (Section 5.1.2) to the
RSA private key K and the ciphertext representative c to
produce an integer message representative m:
m = RSADP (K, c).
If RSADP outputs "ciphertext representative out of range"
(meaning that c >= n), output "decryption error" and stop.
c. Convert the message representative m to an encoded message EM
of length k octets (see Section 4.1):
EM = I2OSP (m, k).
Jonsson & Kaliski Informational [Page 22]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
3. EME-OAEP decoding:
a. If the label L is not provided, let L be the empty string. Let
lHash = Hash(L), an octet string of length hLen (see the note
in Section 7.1.1).
b. Separate the encoded message EM into a single octet Y, an octet
string maskedSeed of length hLen, and an octet string maskedDB
of length k - hLen - 1 as
EM = Y || maskedSeed || maskedDB.
c. Let seedMask = MGF(maskedDB, hLen).
d. Let seed = maskedSeed \xor seedMask.
e. Let dbMask = MGF(seed, k - hLen - 1).
f. Let DB = maskedDB \xor dbMask.
g. Separate DB into an octet string lHash' of length hLen, a
(possibly empty) padding string PS consisting of octets with
hexadecimal value 0x00, and a message M as
DB = lHash' || PS || 0x01 || M.
If there is no octet with hexadecimal value 0x01 to separate PS
from M, if lHash does not equal lHash', or if Y is nonzero,
output "decryption error" and stop. (See the note below.)
4. Output the message M.
Note. Care must be taken to ensure that an opponent cannot
distinguish the different error conditions in Step 3.g, whether by
error message or timing, or, more generally, learn partial
information about the encoded message EM. Otherwise an opponent may
be able to obtain useful information about the decryption of the
ciphertext C, leading to a chosen-ciphertext attack such as the one
observed by Manger [36].
7.2 RSAES-PKCS1-v1_5
RSAES-PKCS1-v1_5 combines the RSAEP and RSADP primitives (Sections
5.1.1 and 5.1.2) with the EME-PKCS1-v1_5 encoding method (step 1 in
Section 7.2.1 and step 3 in Section 7.2.2). It is mathematically
equivalent to the encryption scheme in PKCS #1 v1.5. RSAES-PKCS1-
v1_5 can operate on messages of length up to k - 11 octets (k is the
octet length of the RSA modulus), although care should be taken to
Jonsson & Kaliski Informational [Page 23]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
avoid certain attacks on low-exponent RSA due to Coppersmith,
Franklin, Patarin, and Reiter when long messages are encrypted (see
the third bullet in the notes below and [10]; [14] contains an
improved attack). As a general rule, the use of this scheme for
encrypting an arbitrary message, as opposed to a randomly generated
key, is not recommended.
It is possible to generate valid RSAES-PKCS1-v1_5 ciphertexts without
knowing the corresponding plaintexts, with a reasonable probability
of success. This ability can be exploited in a chosen- ciphertext
attack as shown in [6]. Therefore, if RSAES-PKCS1-v1_5 is to be
used, certain easily implemented countermeasures should be taken to
thwart the attack found in [6]. Typical examples include the
addition of structure to the data to be encoded, rigorous checking of
PKCS #1 v1.5 conformance (and other redundancy) in decrypted
messages, and the consolidation of error messages in a client-server
protocol based on PKCS #1 v1.5. These can all be effective
countermeasures and do not involve changes to a PKCS #1 v1.5-based
protocol. See [7] for a further discussion of these and other
countermeasures. It has recently been shown that the security of the
SSL/TLS handshake protocol [17], which uses RSAES-PKCS1-v1_5 and
certain countermeasures, can be related to a variant of the RSA
problem; see [32] for discussion.
Note. The following passages describe some security recommendations
pertaining to the use of RSAES-PKCS1-v1_5. Recommendations from
version 1.5 of this document are included as well as new
recommendations motivated by cryptanalytic advances made in the
intervening years.
* It is recommended that the pseudorandom octets in step 2 in
Section 7.2.1 be generated independently for each encryption
process, especially if the same data is input to more than one
encryption process. Haastad's results [24] are one motivation for
this recommendation.
* The padding string PS in step 2 in Section 7.2.1 is at least eight
octets long, which is a security condition for public-key
operations that makes it difficult for an attacker to recover data
by trying all possible encryption blocks.
* The pseudorandom octets can also help thwart an attack due to
Coppersmith et al. [10] (see [14] for an improvement of the
attack) when the size of the message to be encrypted is kept
small. The attack works on low-exponent RSA when similar messages
are encrypted with the same RSA public key. More specifically, in
one flavor of the attack, when two inputs to RSAEP agree on a
large fraction of bits (8/9) and low-exponent RSA (e = 3) is used
Jonsson & Kaliski Informational [Page 24]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
to encrypt both of them, it may be possible to recover both inputs
with the attack. Another flavor of the attack is successful in
decrypting a single ciphertext when a large fraction (2/3) of the
input to RSAEP is already known. For typical applications, the
message to be encrypted is short (e.g., a 128-bit symmetric key)
so not enough information will be known or common between two
messages to enable the attack. However, if a long message is
encrypted, or if part of a message is known, then the attack may
be a concern. In any case, the RSAES-OAEP scheme overcomes the
7.2.1 Encryption operation
RSAES-PKCS1-V1_5-ENCRYPT ((n, e), M)
(n, e) recipient's RSA public key (k denotes the length in octets
of the modulus n)
M message to be encrypted, an octet string of length mLen,
where mLen <= k - 11
C ciphertext, an octet string of length k
Error: "message too long"
1. Length checking: If mLen > k - 11, output "message too long" and
2. EME-PKCS1-v1_5 encoding:
a. Generate an octet string PS of length k - mLen - 3 consisting
of pseudo-randomly generated nonzero octets. The length of PS
will be at least eight octets.
b. Concatenate PS, the message M, and other padding to form an
encoded message EM of length k octets as
EM = 0x00 || 0x02 || PS || 0x00 || M.
Jonsson & Kaliski Informational [Page 25]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
3. RSA encryption:
a. Convert the encoded message EM to an integer message
representative m (see Section 4.2):
m = OS2IP (EM).
b. Apply the RSAEP encryption primitive (Section 5.1.1) to the RSA
public key (n, e) and the message representative m to produce
an integer ciphertext representative c:
c = RSAEP ((n, e), m).
c. Convert the ciphertext representative c to a ciphertext C of
length k octets (see Section 4.1):
C = I2OSP (c, k).
4. Output the ciphertext C.
7.2.2 Decryption operation
RSAES-PKCS1-V1_5-DECRYPT (K, C)
K recipient's RSA private key
C ciphertext to be decrypted, an octet string of length k,
where k is the length in octets of the RSA modulus n
M message, an octet string of length at most k - 11
Error: "decryption error"
1. Length checking: If the length of the ciphertext C is not k octets
(or if k < 11), output "decryption error" and stop.
2. RSA decryption:
a. Convert the ciphertext C to an integer ciphertext
representative c (see Section 4.2):
c = OS2IP (C).
Jonsson & Kaliski Informational [Page 26]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
b. Apply the RSADP decryption primitive (Section 5.1.2) to the RSA
private key (n, d) and the ciphertext representative c to
produce an integer message representative m:
m = RSADP ((n, d), c).
If RSADP outputs "ciphertext representative out of range"
(meaning that c >= n), output "decryption error" and stop.
c. Convert the message representative m to an encoded message EM
of length k octets (see Section 4.1):
EM = I2OSP (m, k).
3. EME-PKCS1-v1_5 decoding: Separate the encoded message EM into an
octet string PS consisting of nonzero octets and a message M as
EM = 0x00 || 0x02 || PS || 0x00 || M.
If the first octet of EM does not have hexadecimal value 0x00, if
the second octet of EM does not have hexadecimal value 0x02, if
there is no octet with hexadecimal value 0x00 to separate PS from
M, or if the length of PS is less than 8 octets, output
"decryption error" and stop. (See the note below.)
4. Output M.
Note. Care shall be taken to ensure that an opponent cannot
distinguish the different error conditions in Step 3, whether by
error message or timing. Otherwise an opponent may be able to obtain
useful information about the decryption of the ciphertext C, leading
to a strengthened version of Bleichenbacher's attack [6]; compare to
Manger's attack [36].
8. Signature schemes with appendix
For the purposes of this document, a signature scheme with appendix
consists of a signature generation operation and a signature
verification operation, where the signature generation operation
produces a signature from a message with a signer's RSA private key,
and the signature verification operation verifies the signature on
the message with the signer's corresponding RSA public key. To
verify a signature constructed with this type of scheme it is
necessary to have the message itself. In this way, signature schemes
with appendix are distinguished from signature schemes with message
recovery, which are not supported in this document.
Jonsson & Kaliski Informational [Page 27]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
A signature scheme with appendix can be employed in a variety of
applications. For instance, the signature schemes with appendix
defined here would be suitable signature algorithms for X.509
certificates [28]. Related signature schemes could be employed in
PKCS #7 [45], although for technical reasons the current version of
PKCS #7 separates a hash function from a signature scheme, which is
different than what is done here; see the note in Appendix A.2.3 for
more discussion.
Two signature schemes with appendix are specified in this document:
RSASSA-PSS and RSASSA-PKCS1-v1_5. Although no attacks are known
against RSASSA-PKCS1-v1_5, in the interest of increased robustness,
RSASSA-PSS is recommended for eventual adoption in new applications.
RSASSA-PKCS1-v1_5 is included for compatibility with existing
applications, and while still appropriate for new applications, a
gradual transition to RSASSA-PSS is encouraged.
The signature schemes with appendix given here follow a general model
similar to that employed in IEEE Std 1363-2000 [26], combining
signature and verification primitives with an encoding method for
signatures. The signature generation operations apply a message
encoding operation to a message to produce an encoded message, which
is then converted to an integer message representative. A signature
primitive is applied to the message representative to produce the
signature. Reversing this, the signature verification operations
apply a signature verification primitive to the signature to recover
a message representative, which is then converted to an octet string
encoded message. A verification operation is applied to the message
and the encoded message to determine whether they are consistent.
If the encoding method is deterministic (e.g., EMSA-PKCS1-v1_5), the
verification operation may apply the message encoding operation to
the message and compare the resulting encoded message to the
previously derived encoded message. If there is a match, the
signature is considered valid. If the method is randomized (e.g.,
EMSA-PSS), the verification operation is typically more complicated.
For example, the verification operation in EMSA-PSS extracts the
random salt and a hash output from the encoded message and checks
whether the hash output, the salt, and the message are consistent;
the hash output is a deterministic function in terms of the message
and the salt.
For both signature schemes with appendix defined in this document,
the signature generation and signature verification operations are
readily implemented as "single-pass" operations if the signature is
placed after the message. See PKCS #7 [45] for an example format in
the case of RSASSA-PKCS1-v1_5.
Jonsson & Kaliski Informational [Page 28]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
8.1 RSASSA-PSS
RSASSA-PSS combines the RSASP1 and RSAVP1 primitives with the EMSA-
PSS encoding method. It is compatible with the IFSSA scheme as
amended in the IEEE P1363a draft [27], where the signature and
verification primitives are IFSP-RSA1 and IFVP-RSA1 as defined in
IEEE Std 1363-2000 [26] and the message encoding method is EMSA4.
EMSA4 is slightly more general than EMSA-PSS as it acts on bit
strings rather than on octet strings. EMSA-PSS is equivalent to
EMSA4 restricted to the case that the operands as well as the hash
and salt values are octet strings.
The length of messages on which RSASSA-PSS can operate is either
unrestricted or constrained by a very large number, depending on the
hash function underlying the EMSA-PSS encoding method.
Assuming that computing e-th roots modulo n is infeasible and the
hash and mask generation functions in EMSA-PSS have appropriate
properties, RSASSA-PSS provides secure signatures. This assurance is
provable in the sense that the difficulty of forging signatures can
be directly related to the difficulty of inverting the RSA function,
provided that the hash and mask generation functions are viewed as
black boxes or random oracles. The bounds in the security proof are
essentially "tight", meaning that the success probability and running
time for the best forger against RSASSA-PSS are very close to the
corresponding parameters for the best RSA inversion algorithm; see
[4][13][31] for further discussion.
In contrast to the RSASSA-PKCS1-v1_5 signature scheme, a hash
function identifier is not embedded in the EMSA-PSS encoded message,
so in theory it is possible for an adversary to substitute a
different (and potentially weaker) hash function than the one
selected by the signer. Therefore, it is recommended that the EMSA-
PSS mask generation function be based on the same hash function. In
this manner the entire encoded message will be dependent on the hash
function and it will be difficult for an opponent to substitute a
different hash function than the one intended by the signer. This
matching of hash functions is only for the purpose of preventing hash
function substitution, and is not necessary if hash function
substitution is addressed by other means (e.g., the verifier accepts
only a designated hash function). See [34] for further discussion of
these points. The provable security of RSASSA-PSS does not rely on
the hash function in the mask generation function being the same as
the hash function applied to the message.
RSASSA-PSS is different from other RSA-based signature schemes in
that it is probabilistic rather than deterministic, incorporating a
randomly generated salt value. The salt value enhances the security
Jonsson & Kaliski Informational [Page 29]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
of the scheme by affording a "tighter" security proof than
deterministic alternatives such as Full Domain Hashing (FDH); see [4]
for discussion. However, the randomness is not critical to security.
In situations where random generation is not possible, a fixed value
or a sequence number could be employed instead, with the resulting
provable security similar to that of FDH [12].
8.1.1 Signature generation operation
RSASSA-PSS-SIGN (K, M)
K signer's RSA private key
M message to be signed, an octet string
S signature, an octet string of length k, where k is the
length in octets of the RSA modulus n
Errors: "message too long;" "encoding error"
1. EMSA-PSS encoding: Apply the EMSA-PSS encoding operation (Section
9.1.1) to the message M to produce an encoded message EM of length
\ceil ((modBits - 1)/8) octets such that the bit length of the
integer OS2IP (EM) (see Section 4.2) is at most modBits - 1, where
modBits is the length in bits of the RSA modulus n:
EM = EMSA-PSS-ENCODE (M, modBits - 1).
Note that the octet length of EM will be one less than k if
modBits - 1 is divisible by 8 and equal to k otherwise. If the
encoding operation outputs "message too long," output "message too
long" and stop. If the encoding operation outputs "encoding
error," output "encoding error" and stop.
2. RSA signature:
a. Convert the encoded message EM to an integer message
representative m (see Section 4.2):
m = OS2IP (EM).
Jonsson & Kaliski Informational [Page 30]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
b. Apply the RSASP1 signature primitive (Section 5.2.1) to the RSA
private key K and the message representative m to produce an
integer signature representative s:
s = RSASP1 (K, m).
c. Convert the signature representative s to a signature S of
length k octets (see Section 4.1):
S = I2OSP (s, k).
3. Output the signature S.
8.1.2 Signature verification operation
RSASSA-PSS-VERIFY ((n, e), M, S)
(n, e) signer's RSA public key
M message whose signature is to be verified, an octet string
S signature to be verified, an octet string of length k, where
k is the length in octets of the RSA modulus n
"valid signature" or "invalid signature"
1. Length checking: If the length of the signature S is not k octets,
output "invalid signature" and stop.
2. RSA verification:
a. Convert the signature S to an integer signature representative
s (see Section 4.2):
s = OS2IP (S).
b. Apply the RSAVP1 verification primitive (Section 5.2.2) to the
RSA public key (n, e) and the signature representative s to
produce an integer message representative m:
m = RSAVP1 ((n, e), s).
If RSAVP1 output "signature representative out of range,"
output "invalid signature" and stop.
Jonsson & Kaliski Informational [Page 31]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
c. Convert the message representative m to an encoded message EM
of length emLen = \ceil ((modBits - 1)/8) octets, where modBits
is the length in bits of the RSA modulus n (see Section 4.1):
EM = I2OSP (m, emLen).
Note that emLen will be one less than k if modBits - 1 is
divisible by 8 and equal to k otherwise. If I2OSP outputs
"integer too large," output "invalid signature" and stop.
3. EMSA-PSS verification: Apply the EMSA-PSS verification operation
(Section 9.1.2) to the message M and the encoded message EM to
determine whether they are consistent:
Result = EMSA-PSS-VERIFY (M, EM, modBits - 1).
4. If Result = "consistent," output "valid signature." Otherwise,
output "invalid signature."
8.2. RSASSA-PKCS1-v1_5
RSASSA-PKCS1-v1_5 combines the RSASP1 and RSAVP1 primitives with the
EMSA-PKCS1-v1_5 encoding method. It is compatible with the IFSSA
scheme defined in IEEE Std 1363-2000 [26], where the signature and
verification primitives are IFSP-RSA1 and IFVP-RSA1 and the message
encoding method is EMSA-PKCS1-v1_5 (which is not defined in IEEE Std
1363-2000, but is in the IEEE P1363a draft [27]).
The length of messages on which RSASSA-PKCS1-v1_5 can operate is
either unrestricted or constrained by a very large number, depending
on the hash function underlying the EMSA-PKCS1-v1_5 method.
Assuming that computing e-th roots modulo n is infeasible and the
hash function in EMSA-PKCS1-v1_5 has appropriate properties, RSASSA-
PKCS1-v1_5 is conjectured to provide secure signatures. More
precisely, forging signatures without knowing the RSA private key is
conjectured to be computationally infeasible. Also, in the encoding
method EMSA-PKCS1-v1_5, a hash function identifier is embedded in the
encoding. Because of this feature, an adversary trying to find a
message with the same signature as a previously signed message must
find collisions of the particular hash function being used; attacking
a different hash function than the one selected by the signer is not
useful to the adversary. See [34] for further discussion.
Note. As noted in PKCS #1 v1.5, the EMSA-PKCS1-v1_5 encoding method
has the property that the encoded message, converted to an integer
message representative, is guaranteed to be large and at least
somewhat "random". This prevents attacks of the kind proposed by
Jonsson & Kaliski Informational [Page 32]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
Desmedt and Odlyzko [16] where multiplicative relationships between
message representatives are developed by factoring the message
representatives into a set of small values (e.g., a set of small
primes). Coron, Naccache, and Stern [15] showed that a stronger form
of this type of attack could be quite effective against some
instances of the ISO/IEC 9796-2 signature scheme. They also analyzed
the complexity of this type of attack against the EMSA-PKCS1-v1_5
encoding method and concluded that an attack would be impractical,
requiring more operations than a collision search on the underlying
hash function (i.e., more than 2^80 operations). Coppersmith,
Halevi, and Jutla [11] subsequently extended Coron et al.'s attack to
break the ISO/IEC 9796-1 signature scheme with message recovery. The
various attacks illustrate the importance of carefully constructing
the input to the RSA signature primitive, particularly in a signature
scheme with message recovery. Accordingly, the EMSA-PKCS-v1_5
encoding method explicitly includes a hash operation and is not
intended for signature schemes with message recovery. Moreover,
while no attack is known against the EMSA-PKCS-v1_5 encoding method,
a gradual transition to EMSA-PSS is recommended as a precaution
against future developments.
8.2.1 Signature generation operation
RSASSA-PKCS1-V1_5-SIGN (K, M)
K signer's RSA private key
M message to be signed, an octet string
S signature, an octet string of length k, where k is the
length in octets of the RSA modulus n
Errors: "message too long"; "RSA modulus too short"
1. EMSA-PKCS1-v1_5 encoding: Apply the EMSA-PKCS1-v1_5 encoding
operation (Section 9.2) to the message M to produce an encoded
message EM of length k octets:
EM = EMSA-PKCS1-V1_5-ENCODE (M, k).
If the encoding operation outputs "message too long," output
"message too long" and stop. If the encoding operation outputs
"intended encoded message length too short," output "RSA modulus
too short" and stop.
Jonsson & Kaliski Informational [Page 33]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
2. RSA signature:
a. Convert the encoded message EM to an integer message
representative m (see Section 4.2):
m = OS2IP (EM).
b. Apply the RSASP1 signature primitive (Section 5.2.1) to the RSA
private key K and the message representative m to produce an
integer signature representative s:
s = RSASP1 (K, m).
c. Convert the signature representative s to a signature S of
length k octets (see Section 4.1):
S = I2OSP (s, k).
3. Output the signature S.
8.2.2 Signature verification operation
RSASSA-PKCS1-V1_5-VERIFY ((n, e), M, S)
(n, e) signer's RSA public key
M message whose signature is to be verified, an octet string
S signature to be verified, an octet string of length k, where
k is the length in octets of the RSA modulus n
"valid signature" or "invalid signature"
Errors: "message too long"; "RSA modulus too short"
1. Length checking: If the length of the signature S is not k octets,
output "invalid signature" and stop.
2. RSA verification:
a. Convert the signature S to an integer signature representative
s (see Section 4.2):
s = OS2IP (S).
Jonsson & Kaliski Informational [Page 34]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
b. Apply the RSAVP1 verification primitive (Section 5.2.2) to the
RSA public key (n, e) and the signature representative s to
produce an integer message representative m:
m = RSAVP1 ((n, e), s).
If RSAVP1 outputs "signature representative out of range,"
output "invalid signature" and stop.
c. Convert the message representative m to an encoded message EM
of length k octets (see Section 4.1):
EM' = I2OSP (m, k).
If I2OSP outputs "integer too large," output "invalid
signature" and stop.
3. EMSA-PKCS1-v1_5 encoding: Apply the EMSA-PKCS1-v1_5 encoding
operation (Section 9.2) to the message M to produce a second
encoded message EM' of length k octets:
EM' = EMSA-PKCS1-V1_5-ENCODE (M, k).
If the encoding operation outputs "message too long," output
"message too long" and stop. If the encoding operation outputs
"intended encoded message length too short," output "RSA modulus
too short" and stop.
4. Compare the encoded message EM and the second encoded message EM'.
If they are the same, output "valid signature"; otherwise, output
"invalid signature."
Note. Another way to implement the signature verification operation
is to apply a "decoding" operation (not specified in this document)
to the encoded message to recover the underlying hash value, and then
to compare it to a newly computed hash value. This has the advantage
that it requires less intermediate storage (two hash values rather
than two encoded messages), but the disadvantage that it requires
additional code.
9. Encoding methods for signatures with appendix
Encoding methods consist of operations that map between octet string
messages and octet string encoded messages, which are converted to
and from integer message representatives in the schemes. The integer
message representatives are processed via the primitives. The
encoding methods thus provide the connection between the schemes,
which process messages, and the primitives.
Jonsson & Kaliski Informational [Page 35]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
An encoding method for signatures with appendix, for the purposes of
this document, consists of an encoding operation and optionally a
verification operation. An encoding operation maps a message M to an
encoded message EM of a specified length. A verification operation
determines whether a message M and an encoded message EM are
consistent, i.e., whether the encoded message EM is a valid encoding
of the message M.
The encoding operation may introduce some randomness, so that
different applications of the encoding operation to the same message
will produce different encoded messages, which has benefits for
provable security. For such an encoding method, both an encoding and
a verification operation are needed unless the verifier can reproduce
the randomness (e.g., by obtaining the salt value from the signer).
For a deterministic encoding method only an encoding operation is
Two encoding methods for signatures with appendix are employed in the
signature schemes and are specified here: EMSA-PSS and EMSA-PKCS1-
9.1 EMSA-PSS
This encoding method is parameterized by the choice of hash function,
mask generation function, and salt length. These options should be
fixed for a given RSA key, except that the salt length can be
variable (see [31] for discussion). Suggested hash and mask
generation functions are given in Appendix B. The encoding method is
based on Bellare and Rogaway's Probabilistic Signature Scheme (PSS)
[4][5]. It is randomized and has an encoding operation and a
verification operation.
Jonsson & Kaliski Informational [Page 36]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
Figure 2 illustrates the encoding operation.
| M |
M' = |Padding1| mHash | salt |
+--------+----------+ V
DB = |Padding2|maskedseed| Hash
+--------+----------+ |
| |
V | +--+
xor <--- MGF <---| |bc|
| | +--+
| | |
V V V
EM = | maskedDB |maskedseed|bc|
Figure 2: EMSA-PSS encoding operation. Verification operation
follows reverse steps to recover salt, then forward steps to
recompute and compare H.
1. The encoding method defined here differs from the one in Bellare
and Rogaway's submission to IEEE P1363a [5] in three respects:
* It applies a hash function rather than a mask generation
function to the message. Even though the mask generation
function is based on a hash function, it seems more natural to
apply a hash function directly.
* The value that is hashed together with the salt value is the
string (0x)00 00 00 00 00 00 00 00 || mHash rather than the
message M itself. Here, mHash is the hash of M. Note that the
Jonsson & Kaliski Informational [Page 37]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
hash function is the same in both steps. See Note 3 below for
further discussion. (Also, the name "salt" is used instead of
"seed", as it is more reflective of the value's role.)
* The encoded message in EMSA-PSS has nine fixed bits; the first
bit is 0 and the last eight bits form a "trailer field", the
octet 0xbc. In the original scheme, only the first bit is
fixed. The rationale for the trailer field is for
compatibility with the Rabin-Williams IFSP-RW signature
primitive in IEEE Std 1363-2000 [26] and the corresponding
primitive in the draft ISO/IEC 9796-2 [29].
2. Assuming that the mask generation function is based on a hash
function, it is recommended that the hash function be the same as
the one that is applied to the message; see Section 8.1 for
further discussion.
3. Without compromising the security proof for RSASSA-PSS, one may
perform steps 1 and 2 of EMSA-PSS-ENCODE and EMSA-PSS-VERIFY (the
application of the hash function to the message) outside the
module that computes the rest of the signature operation, so that
mHash rather than the message M itself is input to the module. In
other words, the security proof for RSASSA-PSS still holds even if
an opponent can control the value of mHash. This is convenient if
the module has limited I/O bandwidth, e.g., a smart card. Note
that previous versions of PSS [4][5] did not have this property.
Of course, it may be desirable for other security reasons to have
the module process the full message. For instance, the module may
need to "see" what it is signing if it does not trust the
component that computes the hash value.
4. Typical salt lengths in octets are hLen (the length of the output
of the hash function Hash) and 0. In both cases the security of
RSASSA-PSS can be closely related to the hardness of inverting
RSAVP1. Bellare and Rogaway [4] give a tight lower bound for the
security of the original RSA-PSS scheme, which corresponds roughly
to the former case, while Coron [12] gives a lower bound for the
related Full Domain Hashing scheme, which corresponds roughly to
the latter case. In [13] Coron provides a general treatment with
various salt lengths ranging from 0 to hLen; see [27] for
discussion. See also [31], which adapts the security proofs in
[4][13] to address the differences between the original and the
present version of RSA-PSS as listed in Note 1 above.
5. As noted in IEEE P1363a [27], the use of randomization in
signature schemes - such as the salt value in EMSA-PSS - may
provide a "covert channel" for transmitting information other than
the message being signed. For more on covert channels, see [50].
Jonsson & Kaliski Informational [Page 38]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
9.1.1 Encoding operation
EMSA-PSS-ENCODE (M, emBits)
Hash hash function (hLen denotes the length in octets of the hash
function output)
MGF mask generation function
sLen intended length in octets of the salt
M message to be encoded, an octet string
emBits maximal bit length of the integer OS2IP (EM) (see Section
4.2), at least 8hLen + 8sLen + 9
EM encoded message, an octet string of length emLen = \ceil
Errors: "encoding error"; "message too long"
1. If the length of M is greater than the input limitation for the
hash function (2^61 - 1 octets for SHA-1), output "message too
long" and stop.
2. Let mHash = Hash(M), an octet string of length hLen.
3. If emLen < hLen + sLen + 2, output "encoding error" and stop.
4. Generate a random octet string salt of length sLen; if sLen = 0,
then salt is the empty string.
5. Let
M' = (0x)00 00 00 00 00 00 00 00 || mHash || salt;
M' is an octet string of length 8 + hLen + sLen with eight
initial zero octets.
6. Let H = Hash(M'), an octet string of length hLen.
7. Generate an octet string PS consisting of emLen - sLen - hLen - 2
zero octets. The length of PS may be 0.
8. Let DB = PS || 0x01 || salt; DB is an octet string of length
emLen - hLen - 1.
Jonsson & Kaliski Informational [Page 39]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
9. Let dbMask = MGF(H, emLen - hLen - 1).
10. Let maskedDB = DB \xor dbMask.
11. Set the leftmost 8emLen - emBits bits of the leftmost octet in
maskedDB to zero.
12. Let EM = maskedDB || H || 0xbc.
13. Output EM.
9.1.2 Verification operation
EMSA-PSS-VERIFY (M, EM, emBits)
Hash hash function (hLen denotes the length in octets of the hash
function output)
MGF mask generation function
sLen intended length in octets of the salt
M message to be verified, an octet string
EM encoded message, an octet string of length emLen = \ceil
emBits maximal bit length of the integer OS2IP (EM) (see Section
4.2), at least 8hLen + 8sLen + 9
"consistent" or "inconsistent"
1. If the length of M is greater than the input limitation for the
hash function (2^61 - 1 octets for SHA-1), output "inconsistent"
and stop.
2. Let mHash = Hash(M), an octet string of length hLen.
3. If emLen < hLen + sLen + 2, output "inconsistent" and stop.
4. If the rightmost octet of EM does not have hexadecimal value
0xbc, output "inconsistent" and stop.
5. Let maskedDB be the leftmost emLen - hLen - 1 octets of EM, and
let H be the next hLen octets.
Jonsson & Kaliski Informational [Page 40]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
6. If the leftmost 8emLen - emBits bits of the leftmost octet in
maskedDB are not all equal to zero, output "inconsistent" and
7. Let dbMask = MGF(H, emLen - hLen - 1).
8. Let DB = maskedDB \xor dbMask.
9. Set the leftmost 8emLen - emBits bits of the leftmost octet in DB
to zero.
10. If the emLen - hLen - sLen - 2 leftmost octets of DB are not zero
or if the octet at position emLen - hLen - sLen - 1 (the leftmost
position is "position 1") does not have hexadecimal value 0x01,
output "inconsistent" and stop.
11. Let salt be the last sLen octets of DB.
12. Let
M' = (0x)00 00 00 00 00 00 00 00 || mHash || salt ;
M' is an octet string of length 8 + hLen + sLen with eight
initial zero octets.
13. Let H' = Hash(M'), an octet string of length hLen.
14. If H = H', output "consistent." Otherwise, output "inconsistent."
9.2 EMSA-PKCS1-v1_5
This encoding method is deterministic and only has an encoding
EMSA-PKCS1-v1_5-ENCODE (M, emLen)
Hash hash function (hLen denotes the length in octets of the hash
function output)
M message to be encoded
emLen intended length in octets of the encoded message, at least
tLen + 11, where tLen is the octet length of the DER
encoding T of a certain value computed during the encoding
Jonsson & Kaliski Informational [Page 41]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
EM encoded message, an octet string of length emLen
"message too long"; "intended encoded message length too short"
1. Apply the hash function to the message M to produce a hash value
H = Hash(M).
If the hash function outputs "message too long," output "message
too long" and stop.
2. Encode the algorithm ID for the hash function and the hash value
into an ASN.1 value of type DigestInfo (see Appendix A.2.4) with
the Distinguished Encoding Rules (DER), where the type DigestInfo
has the syntax
DigestInfo ::= SEQUENCE {
digestAlgorithm AlgorithmIdentifier,
digest OCTET STRING
The first field identifies the hash function and the second
contains the hash value. Let T be the DER encoding of the
DigestInfo value (see the notes below) and let tLen be the length
in octets of T.
3. If emLen < tLen + 11, output "intended encoded message length too
short" and stop.
4. Generate an octet string PS consisting of emLen - tLen - 3 octets
with hexadecimal value 0xff. The length of PS will be at least 8
5. Concatenate PS, the DER encoding T, and other padding to form the
encoded message EM as
EM = 0x00 || 0x01 || PS || 0x00 || T.
6. Output EM.
Jonsson & Kaliski Informational [Page 42]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
1. For the six hash functions mentioned in Appendix B.1, the DER
encoding T of the DigestInfo value is equal to the following:
MD2: (0x)30 20 30 0c 06 08 2a 86 48 86 f7 0d 02 02 05 00 04
10 || H.
MD5: (0x)30 20 30 0c 06 08 2a 86 48 86 f7 0d 02 05 05 00 04
10 || H.
SHA-1: (0x)30 21 30 09 06 05 2b 0e 03 02 1a 05 00 04 14 || H.
SHA-256: (0x)30 31 30 0d 06 09 60 86 48 01 65 03 04 02 01 05 00
04 20 || H.
SHA-384: (0x)30 41 30 0d 06 09 60 86 48 01 65 03 04 02 02 05 00
04 30 || H.
SHA-512: (0x)30 51 30 0d 06 09 60 86 48 01 65 03 04 02 03 05 00
04 40 || H.
2. In version 1.5 of this document, T was defined as the BER
encoding, rather than the DER encoding, of the DigestInfo value.
In particular, it is possible - at least in theory - that the
verification operation defined in this document (as well as in
version 2.0) rejects a signature that is valid with respect to the
specification given in PKCS #1 v1.5. This occurs if other rules
than DER are applied to DigestInfo (e.g., an indefinite length
encoding of the underlying SEQUENCE type). While this is unlikely
to be a concern in practice, a cautious implementer may choose to
employ a verification operation based on a BER decoding operation
as specified in PKCS #1 v1.5. In this manner, compatibility with
any valid implementation based on PKCS #1 v1.5 is obtained. Such
a verification operation should indicate whether the underlying
BER encoding is a DER encoding and hence whether the signature is
valid with respect to the specification given in this document.
Jonsson & Kaliski Informational [Page 43]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
Appendix A. ASN.1 syntax
A.1 RSA key representation
This section defines ASN.1 object identifiers for RSA public and
private keys, and defines the types RSAPublicKey and RSAPrivateKey.
The intended application of these definitions includes X.509
certificates, PKCS #8 [46], and PKCS #12 [47].
The object identifier rsaEncryption identifies RSA public and private
keys as defined in Appendices A.1.1 and A.1.2. The parameters field
associated with this OID in a value of type AlgorithmIdentifier shall
have a value of type NULL.
rsaEncryption OBJECT IDENTIFIER ::= { pkcs-1 1 }
The definitions in this section have been extended to support multi-
prime RSA, but are backward compatible with previous versions.
A.1.1 RSA public key syntax
An RSA public key should be represented with the ASN.1 type
RSAPublicKey ::= SEQUENCE {
modulus INTEGER, -- n
publicExponent INTEGER -- e
The fields of type RSAPublicKey have the following meanings:
* modulus is the RSA modulus n.
* publicExponent is the RSA public exponent e.
Jonsson & Kaliski Informational [Page 44]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
A.1.2 RSA private key syntax
An RSA private key should be represented with the ASN.1 type
RSAPrivateKey ::= SEQUENCE {
version Version,
modulus INTEGER, -- n
publicExponent INTEGER, -- e
privateExponent INTEGER, -- d
prime1 INTEGER, -- p
prime2 INTEGER, -- q
exponent1 INTEGER, -- d mod (p-1)
exponent2 INTEGER, -- d mod (q-1)
coefficient INTEGER, -- (inverse of q) mod p
otherPrimeInfos OtherPrimeInfos OPTIONAL
The fields of type RSAPrivateKey have the following meanings:
* version is the version number, for compatibility with future
revisions of this document. It shall be 0 for this version of the
document, unless multi-prime is used, in which case it shall be 1.
Version ::= INTEGER { two-prime(0), multi(1) }
(CONSTRAINED BY
{-- version must be multi if otherPrimeInfos present --})
* modulus is the RSA modulus n.
* publicExponent is the RSA public exponent e.
* privateExponent is the RSA private exponent d.
* prime1 is the prime factor p of n.
* prime2 is the prime factor q of n.
* exponent1 is d mod (p - 1).
* exponent2 is d mod (q - 1).
* coefficient is the CRT coefficient q^(-1) mod p.
* otherPrimeInfos contains the information for the additional primes
r_3, ..., r_u, in order. It shall be omitted if version is 0 and
shall contain at least one instance of OtherPrimeInfo if version
is 1.
Jonsson & Kaliski Informational [Page 45]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
OtherPrimeInfos ::= SEQUENCE SIZE(1..MAX) OF OtherPrimeInfo
OtherPrimeInfo ::= SEQUENCE {
prime INTEGER, -- ri
exponent INTEGER, -- di
coefficient INTEGER -- ti
The fields of type OtherPrimeInfo have the following meanings:
* prime is a prime factor r_i of n, where i >= 3.
* exponent is d_i = d mod (r_i - 1).
* coefficient is the CRT coefficient t_i = (r_1 * r_2 * ... * r_(i-
1))^(-1) mod r_i.
Note. It is important to protect the RSA private key against both
disclosure and modification. Techniques for such protection are
outside the scope of this document. Methods for storing and
distributing private keys and other cryptographic data are described
in PKCS #12 and #15.
A.2 Scheme identification
This section defines object identifiers for the encryption and
signature schemes. The schemes compatible with PKCS #1 v1.5 have the
same definitions as in PKCS #1 v1.5. The intended application of
these definitions includes X.509 certificates and PKCS #7.
Here are type identifier definitions for the PKCS #1 OIDs:
PKCS1Algorithms ALGORITHM-IDENTIFIER ::= {
{ OID rsaEncryption PARAMETERS NULL } |
{ OID md2WithRSAEncryption PARAMETERS NULL } |
{ OID md5WithRSAEncryption PARAMETERS NULL } |
{ OID sha1WithRSAEncryption PARAMETERS NULL } |
{ OID sha256WithRSAEncryption PARAMETERS NULL } |
{ OID sha384WithRSAEncryption PARAMETERS NULL } |
{ OID sha512WithRSAEncryption PARAMETERS NULL } |
{ OID id-RSAES-OAEP PARAMETERS RSAES-OAEP-params } |
PKCS1PSourceAlgorithms |
{ OID id-RSASSA-PSS PARAMETERS RSASSA-PSS-params } ,
... -- Allows for future expansion --
Jonsson & Kaliski Informational [Page 46]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
A.2.1 RSAES-OAEP
The object identifier id-RSAES-OAEP identifies the RSAES-OAEP
encryption scheme.
id-RSAES-OAEP OBJECT IDENTIFIER ::= { pkcs-1 7 }
The parameters field associated with this OID in a value of type
AlgorithmIdentifier shall have a value of type RSAES-OAEP-params:
RSAES-OAEP-params ::= SEQUENCE {
hashAlgorithm [0] HashAlgorithm DEFAULT sha1,
maskGenAlgorithm [1] MaskGenAlgorithm DEFAULT mgf1SHA1,
pSourceAlgorithm [2] PSourceAlgorithm DEFAULT pSpecifiedEmpty
The fields of type RSAES-OAEP-params have the following meanings:
* hashAlgorithm identifies the hash function. It shall be an
algorithm ID with an OID in the set OAEP-PSSDigestAlgorithms.
For a discussion of supported hash functions, see Appendix B.1.
HashAlgorithm ::= AlgorithmIdentifier {
OAEP-PSSDigestAlgorithms ALGORITHM-IDENTIFIER ::= {
{ OID id-sha1 PARAMETERS NULL }|
{ OID id-sha256 PARAMETERS NULL }|
{ OID id-sha384 PARAMETERS NULL }|
{ OID id-sha512 PARAMETERS NULL },
... -- Allows for future expansion --
The default hash function is SHA-1:
sha1 HashAlgorithm ::= {
algorithm id-sha1,
parameters SHA1Parameters : NULL
SHA1Parameters ::= NULL
* maskGenAlgorithm identifies the mask generation function. It
shall be an algorithm ID with an OID in the set
PKCS1MGFAlgorithms, which for this version shall consist of
id-mgf1, identifying the MGF1 mask generation function (see
Appendix B.2.1). The parameters field associated with id-mgf1
Jonsson & Kaliski Informational [Page 47]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
shall be an algorithm ID with an OID in the set
OAEP-PSSDigestAlgorithms, identifying the hash function on which
MGF1 is based.
MaskGenAlgorithm ::= AlgorithmIdentifier {
PKCS1MGFAlgorithms ALGORITHM-IDENTIFIER ::= {
{ OID id-mgf1 PARAMETERS HashAlgorithm },
... -- Allows for future expansion --
The default mask generation function is MGF1 with SHA-1:
mgf1SHA1 MaskGenAlgorithm ::= {
algorithm id-mgf1,
parameters HashAlgorithm : sha1
* pSourceAlgorithm identifies the source (and possibly the value)
of the label L. It shall be an algorithm ID with an OID in the
set PKCS1PSourceAlgorithms, which for this version shall consist
of id-pSpecified, indicating that the label is specified
explicitly. The parameters field associated with id-pSpecified
shall have a value of type OCTET STRING, containing the
label. In previous versions of this specification, the term
"encoding parameters" was used rather than "label", hence the
name of the type below.
PSourceAlgorithm ::= AlgorithmIdentifier {
PKCS1PSourceAlgorithms ALGORITHM-IDENTIFIER ::= {
{ OID id-pSpecified PARAMETERS EncodingParameters },
... -- Allows for future expansion --
id-pSpecified OBJECT IDENTIFIER ::= { pkcs-1 9 }
EncodingParameters ::= OCTET STRING(SIZE(0..MAX))
Jonsson & Kaliski Informational [Page 48]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
The default label is an empty string (so that lHash will contain
the hash of the empty string):
pSpecifiedEmpty PSourceAlgorithm ::= {
algorithm id-pSpecified,
parameters EncodingParameters : emptyString
emptyString EncodingParameters ::= ''H
If all of the default values of the fields in RSAES-OAEP-params
are used, then the algorithm identifier will have the following
rSAES-OAEP-Default-Identifier RSAES-AlgorithmIdentifier ::= {
algorithm id-RSAES-OAEP,
parameters RSAES-OAEP-params : {
hashAlgorithm sha1,
maskGenAlgorithm mgf1SHA1,
pSourceAlgorithm pSpecifiedEmpty
RSAES-AlgorithmIdentifier ::= AlgorithmIdentifier {
A.2.2 RSAES-PKCS1-v1_5
The object identifier rsaEncryption (see Appendix A.1) identifies the
RSAES-PKCS1-v1_5 encryption scheme. The parameters field associated
with this OID in a value of type AlgorithmIdentifier shall have a
value of type NULL. This is the same as in PKCS #1 v1.5.
rsaEncryption OBJECT IDENTIFIER ::= { pkcs-1 1 }
A.2.3 RSASSA-PSS
The object identifier id-RSASSA-PSS identifies the RSASSA-PSS
encryption scheme.
id-RSASSA-PSS OBJECT IDENTIFIER ::= { pkcs-1 10 }
Jonsson & Kaliski Informational [Page 49]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
The parameters field associated with this OID in a value of type
AlgorithmIdentifier shall have a value of type RSASSA-PSS-params:
RSASSA-PSS-params ::= SEQUENCE {
hashAlgorithm [0] HashAlgorithm DEFAULT sha1,
maskGenAlgorithm [1] MaskGenAlgorithm DEFAULT mgf1SHA1,
saltLength [2] INTEGER DEFAULT 20,
trailerField [3] TrailerField DEFAULT trailerFieldBC
The fields of type RSASSA-PSS-params have the following meanings:
* hashAlgorithm identifies the hash function. It shall be an
algorithm ID with an OID in the set OAEP-PSSDigestAlgorithms (see
Appendix A.2.1). The default hash function is SHA-1.
* maskGenAlgorithm identifies the mask generation function. It
shall be an algorithm ID with an OID in the set
PKCS1MGFAlgorithms (see Appendix A.2.1). The default mask
generation function is MGF1 with SHA-1. For MGF1 (and more
generally, for other mask generation functions based on a hash
function), it is recommended that the underlying hash function be
the same as the one identified by hashAlgorithm; see Note 2 in
Section 9.1 for further comments.
* saltLength is the octet length of the salt. It shall be an
integer. For a given hashAlgorithm, the default value of
saltLength is the octet length of the hash value. Unlike the
other fields of type RSASSA-PSS-params, saltLength does not need
to be fixed for a given RSA key pair.
* trailerField is the trailer field number, for compatibility with
the draft IEEE P1363a [27]. It shall be 1 for this version of the
document, which represents the trailer field with hexadecimal
value 0xbc. Other trailer fields (including the trailer field
HashID || 0xcc in IEEE P1363a) are not supported in this document.
TrailerField ::= INTEGER { trailerFieldBC(1) }
If the default values of the hashAlgorithm, maskGenAlgorithm, and
trailerField fields of RSASSA-PSS-params are used, then the
algorithm identifier will have the following value:
Jonsson & Kaliski Informational [Page 50]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
rSASSA-PSS-Default-Identifier RSASSA-AlgorithmIdentifier ::= {
algorithm id-RSASSA-PSS,
parameters RSASSA-PSS-params : {
hashAlgorithm sha1,
maskGenAlgorithm mgf1SHA1,
saltLength 20,
trailerField trailerFieldBC
RSASSA-AlgorithmIdentifier ::=
AlgorithmIdentifier { {PKCS1Algorithms} }
Note. In some applications, the hash function underlying a signature
scheme is identified separately from the rest of the operations in
the signature scheme. For instance, in PKCS #7 [45], a hash function
identifier is placed before the message and a "digest encryption"
algorithm identifier (indicating the rest of the operations) is
carried with the signature. In order for PKCS #7 to support the
RSASSA-PSS signature scheme, an object identifier would need to be
defined for the operations in RSASSA-PSS after the hash function
(analogous to the RSAEncryption OID for the RSASSA-PKCS1-v1_5
scheme). S/MIME CMS [25] takes a different approach. Although a
hash function identifier is placed before the message, an algorithm
identifier for the full signature scheme may be carried with a CMS
signature (this is done for DSA signatures). Following this
convention, the id-RSASSA-PSS OID can be used to identify RSASSA-PSS
signatures in CMS. Since CMS is considered the successor to PKCS #7
and new developments such as the addition of support for RSASSA-PSS
will be pursued with respect to CMS rather than PKCS #7, an OID for
the "rest of" RSASSA-PSS is not defined in this version of PKCS #1.
A.2.4 RSASSA-PKCS1-v1_5
The object identifier for RSASSA-PKCS1-v1_5 shall be one of the
following. The choice of OID depends on the choice of hash
algorithm: MD2, MD5, SHA-1, SHA-256, SHA-384, or SHA-512. Note that
if either MD2 or MD5 is used, then the OID is just as in PKCS #1
v1.5. For each OID, the parameters field associated with this OID in
a value of type AlgorithmIdentifier shall have a value of type NULL.
The OID should be chosen in accordance with the following table:
Hash algorithm OID
MD2 md2WithRSAEncryption ::= {pkcs-1 2}
MD5 md5WithRSAEncryption ::= {pkcs-1 4}
SHA-1 sha1WithRSAEncryption ::= {pkcs-1 5}
SHA-256 sha256WithRSAEncryption ::= {pkcs-1 11}
Jonsson & Kaliski Informational [Page 51]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
SHA-384 sha384WithRSAEncryption ::= {pkcs-1 12}
SHA-512 sha512WithRSAEncryption ::= {pkcs-1 13}
The EMSA-PKCS1-v1_5 encoding method includes an ASN.1 value of type
DigestInfo, where the type DigestInfo has the syntax
DigestInfo ::= SEQUENCE {
digestAlgorithm DigestAlgorithm,
digest OCTET STRING
digestAlgorithm identifies the hash function and shall be an
algorithm ID with an OID in the set PKCS1-v1-5DigestAlgorithms. For
a discussion of supported hash functions, see Appendix B.1.
DigestAlgorithm ::=
AlgorithmIdentifier { {PKCS1-v1-5DigestAlgorithms} }
PKCS1-v1-5DigestAlgorithms ALGORITHM-IDENTIFIER ::= {
{ OID id-md2 PARAMETERS NULL }|
{ OID id-md5 PARAMETERS NULL }|
{ OID id-sha1 PARAMETERS NULL }|
{ OID id-sha256 PARAMETERS NULL }|
{ OID id-sha384 PARAMETERS NULL }|
{ OID id-sha512 PARAMETERS NULL }
Appendix B. Supporting techniques
This section gives several examples of underlying functions
supporting the encryption schemes in Section 7 and the encoding
methods in Section 9. A range of techniques is given here to allow
compatibility with existing applications as well as migration to new
techniques. While these supporting techniques are appropriate for
applications to implement, none of them is required to be
implemented. It is expected that profiles for PKCS #1 v2.1 will be
developed that specify particular supporting techniques.
This section also gives object identifiers for the supporting
B.1 Hash functions
Hash functions are used in the operations contained in Sections 7 and
9. Hash functions are deterministic, meaning that the output is
completely determined by the input. Hash functions take octet
strings of variable length, and generate fixed length octet strings.
Jonsson & Kaliski Informational [Page 52]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
The hash functions used in the operations contained in Sections 7 and
9 should generally be collision-resistant. This means that it is
infeasible to find two distinct inputs to the hash function that
produce the same output. A collision-resistant hash function also
has the desirable property of being one-way; this means that given an
output, it is infeasible to find an input whose hash is the specified
output. In addition to the requirements, the hash function should
yield a mask generation function (Appendix B.2) with pseudorandom
Six hash functions are given as examples for the encoding methods in
this document: MD2 [33], MD5 [41], SHA-1 [38], and the proposed
algorithms SHA-256, SHA-384, and SHA-512 [39]. For the RSAES-OAEP
encryption scheme and EMSA-PSS encoding method, only SHA-1 and SHA-
256/384/512 are recommended. For the EMSA-PKCS1-v1_5 encoding
method, SHA-1 or SHA-256/384/512 are recommended for new
applications. MD2 and MD5 are recommended only for compatibility
with existing applications based on PKCS #1 v1.5.
The object identifiers id-md2, id-md5, id-sha1, id-sha256, id-sha384,
and id-sha512, identify the respective hash functions:
id-md2 OBJECT IDENTIFIER ::= {
iso(1) member-body(2) us(840) rsadsi(113549)
digestAlgorithm(2) 2
id-md5 OBJECT IDENTIFIER ::= {
iso(1) member-body(2) us(840) rsadsi(113549)
digestAlgorithm(2) 5
id-sha1 OBJECT IDENTIFIER ::= {
iso(1) identified-organization(3) oiw(14) secsig(3)
algorithms(2) 26
id-sha256 OBJECT IDENTIFIER ::= {
joint-iso-itu-t(2) country(16) us(840) organization(1)
gov(101) csor(3) nistalgorithm(4) hashalgs(2) 1
id-sha384 OBJECT IDENTIFIER ::= {
joint-iso-itu-t(2) country(16) us(840) organization(1)
gov(101) csor(3) nistalgorithm(4) hashalgs(2) 2
Jonsson & Kaliski Informational [Page 53]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
id-sha512 OBJECT IDENTIFIER ::= {
joint-iso-itu-t(2) country(16) us(840) organization(1)
gov(101) csor(3) nistalgorithm(4) hashalgs(2) 3
The parameters field associated with id-md2 and id-md5 in a value of
type AlgorithmIdentifier shall have a value of type NULL.
The parameters field associated with id-sha1, id-sha256, id-sha384,
and id-sha512 should be omitted, but if present, shall have a value
of type NULL.
Note. Version 1.5 of PKCS #1 also allowed for the use of MD4 in
signature schemes. The cryptanalysis of MD4 has progressed
significantly in the intervening years. For example, Dobbertin [18]
demonstrated how to find collisions for MD4 and that the first two
rounds of MD4 are not one-way [20]. Because of these results and
others (e.g., [8]), MD4 is no longer recommended. There have also
been advances in the cryptanalysis of MD2 and MD5, although not
enough to warrant removal from existing applications. Rogier and
Chauvaud [43] demonstrated how to find collisions in a modified
version of MD2. No one has demonstrated how to find collisions for
the full MD5 algorithm, although partial results have been found
(e.g., [9][19]).
To address these concerns, SHA-1, SHA-256, SHA-384, or SHA-512 are
recommended for new applications. As of today, the best (known)
collision attacks against these hash functions are generic attacks
with complexity 2^(L/2), where L is the bit length of the hash
output. For the signature schemes in this document, a collision
attack is easily translated into a signature forgery. Therefore, the
value L / 2 should be at least equal to the desired security level in
bits of the signature scheme (a security level of B bits means that
the best attack has complexity 2^B). The same rule of thumb can be
applied to RSAES-OAEP; it is recommended that the bit length of the
seed (which is equal to the bit length of the hash output) be twice
the desired security level in bits.
B.2 Mask generation functions
A mask generation function takes an octet string of variable length
and a desired output length as input, and outputs an octet string of
the desired length. There may be restrictions on the length of the
input and output octet strings, but such bounds are generally very
large. Mask generation functions are deterministic; the octet string
output is completely determined by the input octet string. The
output of a mask generation function should be pseudorandom: Given
one part of the output but not the input, it should be infeasible to
Jonsson & Kaliski Informational [Page 54]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
predict another part of the output. The provable security of RSAES-
OAEP and RSASSA-PSS relies on the random nature of the output of the
mask generation function, which in turn relies on the random nature
of the underlying hash.
One mask generation function is given here: MGF1, which is based on a
hash function. MGF1 coincides with the mask generation functions
defined in IEEE Std 1363-2000 [26] and the draft ANSI X9.44 [1].
Future versions of this document may define other mask generation
B.2.1 MGF1
MGF1 is a Mask Generation Function based on a hash function.
MGF1 (mgfSeed, maskLen)
Hash hash function (hLen denotes the length in octets of the hash
function output)
mgfSeed seed from which mask is generated, an octet string
maskLen intended length in octets of the mask, at most 2^32 hLen
mask mask, an octet string of length maskLen
Error: "mask too long"
1. If maskLen > 2^32 hLen, output "mask too long" and stop.
2. Let T be the empty octet string.
3. For counter from 0 to \ceil (maskLen / hLen) - 1, do the
a. Convert counter to an octet string C of length 4 octets (see
Section 4.1):
C = I2OSP (counter, 4) .
b. Concatenate the hash of the seed mgfSeed and C to the octet
string T:
T = T || Hash(mgfSeed || C) .
Jonsson & Kaliski Informational [Page 55]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
4. Output the leading maskLen octets of T as the octet string mask.
The object identifier id-mgf1 identifies the MGF1 mask generation
id-mgf1 OBJECT IDENTIFIER ::= { pkcs-1 8 }
The parameters field associated with this OID in a value of type
AlgorithmIdentifier shall have a value of type hashAlgorithm,
identifying the hash function on which MGF1 is based.
Appendix C. ASN.1 module
PKCS-1 {
iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-1(1)
modules(0) pkcs-1(1)
-- $ Revision: 2.1r1 $
-- This module has been checked for conformance with the ASN.1
-- standard by the OSS ASN.1 Tools
DEFINITIONS EXPLICIT TAGS ::=
-- EXPORTS ALL
-- All types and values defined in this module are exported for use
-- in other ASN.1 modules.
id-sha256, id-sha384, id-sha512
FROM NIST-SHA2 {
joint-iso-itu-t(2) country(16) us(840) organization(1)
gov(101) csor(3) nistalgorithm(4) modules(0) sha2(1)
-- ============================
-- Basic object identifiers
-- ============================
-- The DER encoding of this in hexadecimal is:
-- (0x)06 08
-- 2A 86 48 86 F7 0D 01 01
pkcs-1 OBJECT IDENTIFIER ::= {
Jonsson & Kaliski Informational [Page 56]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) 1
-- When rsaEncryption is used in an AlgorithmIdentifier the
-- parameters MUST be present and MUST be NULL.
rsaEncryption OBJECT IDENTIFIER ::= { pkcs-1 1 }
-- When id-RSAES-OAEP is used in an AlgorithmIdentifier the
-- parameters MUST be present and MUST be RSAES-OAEP-params.
id-RSAES-OAEP OBJECT IDENTIFIER ::= { pkcs-1 7 }
-- When id-pSpecified is used in an AlgorithmIdentifier the
-- parameters MUST be an OCTET STRING.
id-pSpecified OBJECT IDENTIFIER ::= { pkcs-1 9 }
-- When id-RSASSA-PSS is used in an AlgorithmIdentifier the
-- parameters MUST be present and MUST be RSASSA-PSS-params.
id-RSASSA-PSS OBJECT IDENTIFIER ::= { pkcs-1 10 }
-- When the following OIDs are used in an AlgorithmIdentifier the
-- parameters MUST be present and MUST be NULL.
md2WithRSAEncryption OBJECT IDENTIFIER ::= { pkcs-1 2 }
md5WithRSAEncryption OBJECT IDENTIFIER ::= { pkcs-1 4 }
sha1WithRSAEncryption OBJECT IDENTIFIER ::= { pkcs-1 5 }
sha256WithRSAEncryption OBJECT IDENTIFIER ::= { pkcs-1 11 }
sha384WithRSAEncryption OBJECT IDENTIFIER ::= { pkcs-1 12 }
sha512WithRSAEncryption OBJECT IDENTIFIER ::= { pkcs-1 13 }
-- This OID really belongs in a module with the secsig OIDs.
id-sha1 OBJECT IDENTIFIER ::= {
iso(1) identified-organization(3) oiw(14) secsig(3)
algorithms(2) 26
-- OIDs for MD2 and MD5, allowed only in EMSA-PKCS1-v1_5.
Jonsson & Kaliski Informational [Page 57]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
id-md2 OBJECT IDENTIFIER ::= {
iso(1) member-body(2) us(840) rsadsi(113549) digestAlgorithm(2) 2
id-md5 OBJECT IDENTIFIER ::= {
iso(1) member-body(2) us(840) rsadsi(113549) digestAlgorithm(2) 5
-- When id-mgf1 is used in an AlgorithmIdentifier the parameters MUST
-- be present and MUST be a HashAlgorithm, for example sha1.
id-mgf1 OBJECT IDENTIFIER ::= { pkcs-1 8 }
-- ================
-- Useful types
-- ================
ALGORITHM-IDENTIFIER ::= CLASS {
&id OBJECT IDENTIFIER UNIQUE,
&Type OPTIONAL
WITH SYNTAX { OID &id [PARAMETERS &Type] }
-- Note: the parameter InfoObjectSet in the following definitions
-- allows a distinct information object set to be specified for sets
-- of algorithms such as:
-- DigestAlgorithms ALGORITHM-IDENTIFIER ::= {
-- { OID id-md2 PARAMETERS NULL }|
-- { OID id-md5 PARAMETERS NULL }|
-- { OID id-sha1 PARAMETERS NULL }
-- }
AlgorithmIdentifier { ALGORITHM-IDENTIFIER:InfoObjectSet } ::=
algorithm ALGORITHM-IDENTIFIER.&id({InfoObjectSet}),
-- ==============
-- Algorithms
-- ==============
Jonsson & Kaliski Informational [Page 58]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
-- Allowed EME-OAEP and EMSA-PSS digest algorithms.
OAEP-PSSDigestAlgorithms ALGORITHM-IDENTIFIER ::= {
{ OID id-sha1 PARAMETERS NULL }|
{ OID id-sha256 PARAMETERS NULL }|
{ OID id-sha384 PARAMETERS NULL }|
{ OID id-sha512 PARAMETERS NULL },
... -- Allows for future expansion --
-- Allowed EMSA-PKCS1-v1_5 digest algorithms.
PKCS1-v1-5DigestAlgorithms ALGORITHM-IDENTIFIER ::= {
{ OID id-md2 PARAMETERS NULL }|
{ OID id-md5 PARAMETERS NULL }|
{ OID id-sha1 PARAMETERS NULL }|
{ OID id-sha256 PARAMETERS NULL }|
{ OID id-sha384 PARAMETERS NULL }|
{ OID id-sha512 PARAMETERS NULL }
-- When id-md2 and id-md5 are used in an AlgorithmIdentifier the
-- parameters MUST be present and MUST be NULL.
-- When id-sha1, id-sha256, id-sha384 and id-sha512 are used in an
-- AlgorithmIdentifier the parameters (which are optional) SHOULD
-- be omitted. However, an implementation MUST also accept
-- AlgorithmIdentifier values where the parameters are NULL.
sha1 HashAlgorithm ::= {
algorithm id-sha1,
parameters SHA1Parameters : NULL -- included for compatibility
-- with existing implementations
HashAlgorithm ::= AlgorithmIdentifier { {OAEP-PSSDigestAlgorithms} }
SHA1Parameters ::= NULL
-- Allowed mask generation function algorithms.
-- If the identifier is id-mgf1, the parameters are a HashAlgorithm.
PKCS1MGFAlgorithms ALGORITHM-IDENTIFIER ::= {
{ OID id-mgf1 PARAMETERS HashAlgorithm },
... -- Allows for future expansion --
Jonsson & Kaliski Informational [Page 59]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
-- Default AlgorithmIdentifier for id-RSAES-OAEP.maskGenAlgorithm and
-- id-RSASSA-PSS.maskGenAlgorithm.
mgf1SHA1 MaskGenAlgorithm ::= {
algorithm id-mgf1,
parameters HashAlgorithm : sha1
MaskGenAlgorithm ::= AlgorithmIdentifier { {PKCS1MGFAlgorithms} }
-- Allowed algorithms for pSourceAlgorithm.
PKCS1PSourceAlgorithms ALGORITHM-IDENTIFIER ::= {
{ OID id-pSpecified PARAMETERS EncodingParameters },
... -- Allows for future expansion --
EncodingParameters ::= OCTET STRING(SIZE(0..MAX))
-- This identifier means that the label L is an empty string, so the
-- digest of the empty string appears in the RSA block before
-- masking.
pSpecifiedEmpty PSourceAlgorithm ::= {
algorithm id-pSpecified,
parameters EncodingParameters : emptyString
PSourceAlgorithm ::= AlgorithmIdentifier { {PKCS1PSourceAlgorithms} }
emptyString EncodingParameters ::= ''H
-- Type identifier definitions for the PKCS #1 OIDs.
PKCS1Algorithms ALGORITHM-IDENTIFIER ::= {
{ OID rsaEncryption PARAMETERS NULL } |
{ OID md2WithRSAEncryption PARAMETERS NULL } |
{ OID md5WithRSAEncryption PARAMETERS NULL } |
{ OID sha1WithRSAEncryption PARAMETERS NULL } |
{ OID sha256WithRSAEncryption PARAMETERS NULL } |
{ OID sha384WithRSAEncryption PARAMETERS NULL } |
{ OID sha512WithRSAEncryption PARAMETERS NULL } |
{ OID id-RSAES-OAEP PARAMETERS RSAES-OAEP-params } |
PKCS1PSourceAlgorithms |
Jonsson & Kaliski Informational [Page 60]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
{ OID id-RSASSA-PSS PARAMETERS RSASSA-PSS-params } ,
... -- Allows for future expansion --
-- ===================
-- Main structures
-- ===================
RSAPublicKey ::= SEQUENCE {
modulus INTEGER, -- n
publicExponent INTEGER -- e
-- Representation of RSA private key with information for the CRT
-- algorithm.
RSAPrivateKey ::= SEQUENCE {
version Version,
modulus INTEGER, -- n
publicExponent INTEGER, -- e
privateExponent INTEGER, -- d
prime1 INTEGER, -- p
prime2 INTEGER, -- q
exponent1 INTEGER, -- d mod (p-1)
exponent2 INTEGER, -- d mod (q-1)
coefficient INTEGER, -- (inverse of q) mod p
otherPrimeInfos OtherPrimeInfos OPTIONAL
Version ::= INTEGER { two-prime(0), multi(1) }
(CONSTRAINED BY {
-- version must be multi if otherPrimeInfos present --
OtherPrimeInfos ::= SEQUENCE SIZE(1..MAX) OF OtherPrimeInfo
OtherPrimeInfo ::= SEQUENCE {
prime INTEGER, -- ri
exponent INTEGER, -- di
coefficient INTEGER -- ti
-- AlgorithmIdentifier.parameters for id-RSAES-OAEP.
-- Note that the tags in this Sequence are explicit.
RSAES-OAEP-params ::= SEQUENCE {
Jonsson & Kaliski Informational [Page 61]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
hashAlgorithm [0] HashAlgorithm DEFAULT sha1,
maskGenAlgorithm [1] MaskGenAlgorithm DEFAULT mgf1SHA1,
pSourceAlgorithm [2] PSourceAlgorithm DEFAULT pSpecifiedEmpty
-- Identifier for default RSAES-OAEP algorithm identifier.
-- The DER Encoding of this is in hexadecimal:
-- (0x)30 0D
-- 06 09
-- 2A 86 48 86 F7 0D 01 01 07
-- 30 00
-- Notice that the DER encoding of default values is "empty".
rSAES-OAEP-Default-Identifier RSAES-AlgorithmIdentifier ::= {
algorithm id-RSAES-OAEP,
parameters RSAES-OAEP-params : {
hashAlgorithm sha1,
maskGenAlgorithm mgf1SHA1,
pSourceAlgorithm pSpecifiedEmpty
RSAES-AlgorithmIdentifier ::=
AlgorithmIdentifier { {PKCS1Algorithms} }
-- AlgorithmIdentifier.parameters for id-RSASSA-PSS.
-- Note that the tags in this Sequence are explicit.
RSASSA-PSS-params ::= SEQUENCE {
hashAlgorithm [0] HashAlgorithm DEFAULT sha1,
maskGenAlgorithm [1] MaskGenAlgorithm DEFAULT mgf1SHA1,
saltLength [2] INTEGER DEFAULT 20,
trailerField [3] TrailerField DEFAULT trailerFieldBC
TrailerField ::= INTEGER { trailerFieldBC(1) }
-- Identifier for default RSASSA-PSS algorithm identifier
-- The DER Encoding of this is in hexadecimal:
-- (0x)30 0D
-- 06 09
-- 2A 86 48 86 F7 0D 01 01 0A
-- 30 00
-- Notice that the DER encoding of default values is "empty".
Jonsson & Kaliski Informational [Page 62]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
rSASSA-PSS-Default-Identifier RSASSA-AlgorithmIdentifier ::= {
algorithm id-RSASSA-PSS,
parameters RSASSA-PSS-params : {
hashAlgorithm sha1,
maskGenAlgorithm mgf1SHA1,
saltLength 20,
trailerField trailerFieldBC
RSASSA-AlgorithmIdentifier ::=
AlgorithmIdentifier { {PKCS1Algorithms} }
-- Syntax for the EMSA-PKCS1-v1_5 hash identifier.
DigestInfo ::= SEQUENCE {
digestAlgorithm DigestAlgorithm,
digest OCTET STRING
DigestAlgorithm ::=
AlgorithmIdentifier { {PKCS1-v1-5DigestAlgorithms} }
END -- PKCS1Definitions
Appendix D. Intellectual Property Considerations
The RSA public-key cryptosystem is described in U.S. Patent
4,405,829, which expired on September 20, 2000. RSA Security Inc.
makes no other patent claims on the constructions described in this
document, although specific underlying techniques may be covered.
Multi-prime RSA is described in U.S. Patent 5,848,159.
The University of California has indicated that it has a patent
pending on the PSS signature scheme [5]. It has also provided a
letter to the IEEE P1363 working group stating that if the PSS
signature scheme is included in an IEEE standard, "the University of
California will, when that standard is adopted, FREELY license any
conforming implementation of PSS as a technique for achieving a
digital signature with appendix" [23]. The PSS signature scheme is
specified in the IEEE P1363a draft [27], which was in ballot
resolution when this document was published.
Jonsson & Kaliski Informational [Page 63]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
License to copy this document is granted provided that it is
identified as "RSA Security Inc. Public-Key Cryptography Standards
(PKCS)" in all material mentioning or referencing this document.
RSA Security Inc. makes no other representations regarding
intellectual property claims by other parties. Such determination is
the responsibility of the user.
Appendix E. Revision history
Versions 1.0 - 1.3
Versions 1.0 - 1.3 were distributed to participants in RSA Data
Security, Inc.'s Public-Key Cryptography Standards meetings in
February and March 1991.
Version 1.4
Version 1.4 was part of the June 3, 1991 initial public release of
PKCS. Version 1.4 was published as NIST/OSI Implementors'
Workshop document SEC-SIG-91-18.
Version 1.5
Version 1.5 incorporated several editorial changes, including
updates to the references and the addition of a revision history.
The following substantive changes were made:
- Section 10: "MD4 with RSA" signature and verification processes
were added.
- Section 11: md4WithRSAEncryption object identifier was added.
Version 1.5 was republished as IETF RFC 2313.
Version 2.0
Version 2.0 incorporated major editorial changes in terms of the
document structure and introduced the RSAES-OAEP encryption
scheme. This version continued to support the encryption and
signature processes in version 1.5, although the hash algorithm
MD4 was no longer allowed due to cryptanalytic advances in the
intervening years. Version 2.0 was republished as IETF RFC 2437
Jonsson & Kaliski Informational [Page 64]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
Version 2.1
Version 2.1 introduces multi-prime RSA and the RSASSA-PSS
signature scheme with appendix along with several editorial
improvements. This version continues to support the schemes in
version 2.0.
Appendix F: References
[1] ANSI X9F1 Working Group. ANSI X9.44 Draft D2: Key
Establishment Using Integer Factorization Cryptography.
Working Draft, March 2002.
[2] M. Bellare, A. Desai, D. Pointcheval and P. Rogaway. Relations
Among Notions of Security for Public-Key Encryption Schemes.
In H. Krawczyk, editor, Advances in Cryptology - Crypto '98,
volume 1462 of Lecture Notes in Computer Science, pp. 26 - 45.
Springer Verlag, 1998.
[3] M. Bellare and P. Rogaway. Optimal Asymmetric Encryption - How
to Encrypt with RSA. In A. De Santis, editor, Advances in
Cryptology - Eurocrypt '94, volume 950 of Lecture Notes in
Computer Science, pp. 92 - 111. Springer Verlag, 1995.
[4] M. Bellare and P. Rogaway. The Exact Security of Digital
Signatures - How to Sign with RSA and Rabin. In U. Maurer,
editor, Advances in Cryptology - Eurocrypt '96, volume 1070 of
Lecture Notes in Computer Science, pp. 399 - 416. Springer
Verlag, 1996.
[5] M. Bellare and P. Rogaway. PSS: Provably Secure Encoding
Method for Digital Signatures. Submission to IEEE P1363
working group, August 1998. Available from
[6] D. Bleichenbacher. Chosen Ciphertext Attacks Against Protocols
Based on the RSA Encryption Standard PKCS #1. In H. Krawczyk,
editor, Advances in Cryptology - Crypto '98, volume 1462 of
Lecture Notes in Computer Science, pp. 1 - 12. Springer
Verlag, 1998.
[7] D. Bleichenbacher, B. Kaliski and J. Staddon. Recent Results
on PKCS #1: RSA Encryption Standard. RSA Laboratories'
Bulletin No. 7, June 1998.
Jonsson & Kaliski Informational [Page 65]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
[8] B. den Boer and A. Bosselaers. An Attack on the Last Two
Rounds of MD4. In J. Feigenbaum, editor, Advances in
Cryptology - Crypto '91, volume 576 of Lecture Notes in
Computer Science, pp. 194 - 203. Springer Verlag, 1992.
[9] B. den Boer and A. Bosselaers. Collisions for the Compression
Function of MD5. In T. Helleseth, editor, Advances in
Cryptology - Eurocrypt '93, volume 765 of Lecture Notes in
Computer Science, pp. 293 - 304. Springer Verlag, 1994.
[10] D. Coppersmith, M. Franklin, J. Patarin and M. Reiter. Low-
Exponent RSA with Related Messages. In U. Maurer, editor,
Advances in Cryptology - Eurocrypt '96, volume 1070 of Lecture
Notes in Computer Science, pp. 1 - 9. Springer Verlag, 1996.
[11] D. Coppersmith, S. Halevi and C. Jutla. ISO 9796-1 and the New
Forgery Strategy. Presented at the rump session of Crypto '99,
August 1999.
[12] J.-S. Coron. On the Exact Security of Full Domain Hashing. In
M. Bellare, editor, Advances in Cryptology - Crypto 2000,
volume 1880 of Lecture Notes in Computer Science, pp. 229 -
235. Springer Verlag, 2000.
[13] J.-S. Coron. Optimal Security Proofs for PSS and Other
Signature Schemes. In L. Knudsen, editor, Advances in
Cryptology - Eurocrypt 2002, volume 2332 of Lecture Notes in
Computer Science, pp. 272 - 287. Springer Verlag, 2002.
[14] J.-S. Coron, M. Joye, D. Naccache and P. Paillier. New Attacks
on PKCS #1 v1.5 Encryption. In B. Preneel, editor, Advances in
Cryptology - Eurocrypt 2000, volume 1807 of Lecture Notes in
Computer Science, pp. 369 - 379. Springer Verlag, 2000.
[15] J.-S. Coron, D. Naccache and J. P. Stern. On the Security of
RSA Padding. In M. Wiener, editor, Advances in Cryptology -
Crypto '99, volume 1666 of Lecture Notes in Computer Science,
pp. 1 - 18. Springer Verlag, 1999.
[16] Y. Desmedt and A.M. Odlyzko. A Chosen Text Attack on the RSA
Cryptosystem and Some Discrete Logarithm Schemes. In H.C.
Williams, editor, Advances in Cryptology - Crypto '85, volume
218 of Lecture Notes in Computer Science, pp. 516 - 522.
Springer Verlag, 1986.
[17] Dierks, T. and C. Allen, "The TLS Protocol, Version 1.0", RFC
2246, January 1999.
Jonsson & Kaliski Informational [Page 66]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
[18] H. Dobbertin. Cryptanalysis of MD4. In D. Gollmann, editor,
Fast Software Encryption '96, volume 1039 of Lecture Notes in
Computer Science, pp. 55 - 72. Springer Verlag, 1996.
[19] H. Dobbertin. Cryptanalysis of MD5 Compress. Presented at the
rump session of Eurocrypt '96, May 1996.
[20] H. Dobbertin. The First Two Rounds of MD4 are Not One-Way. In
S. Vaudenay, editor, Fast Software Encryption '98, volume 1372
in Lecture Notes in Computer Science, pp. 284 - 292. Springer
Verlag, 1998.
[21] E. Fujisaki, T. Okamoto, D. Pointcheval and J. Stern. RSA-OAEP
is Secure under the RSA Assumption. In J. Kilian, editor,
Advances in Cryptology - Crypto 2001, volume 2139 of Lecture
Notes in Computer Science, pp. 260 - 274. Springer Verlag,
[22] H. Garner. The Residue Number System. IRE Transactions on
Electronic Computers, EC-8 (6), pp. 140 - 147, June 1959.
[23] M.L. Grell. Re: Encoding Methods PSS/PSS-R. Letter to IEEE
P1363 working group, University of California, June 15, 1999.
Available from
[24] J. Haastad. Solving Simultaneous Modular Equations of Low
Degree. SIAM Journal of Computing, volume 17, pp. 336 - 341,
[25] Housley, R., "Cryptographic Message Syntax (CMS)", RFC 3369,
August 2002. Housley, R., "Cryptographic Message Syntax (CMS)
Algorithms", RFC 3370, August 2002.
[26] IEEE Std 1363-2000: Standard Specifications for Public Key
Cryptography. IEEE, August 2000.
[27] IEEE P1363 working group. IEEE P1363a D11: Draft Standard
Specifications for Public Key Cryptography -- Amendment 1:
Additional Techniques. December 16, 2002. Available from
[28] ISO/IEC 9594-8:1997: Information technology - Open Systems
Interconnection - The Directory: Authentication Framework.
Jonsson & Kaliski Informational [Page 67]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
[29] ISO/IEC FDIS 9796-2: Information Technology - Security
Techniques - Digital Signature Schemes Giving Message Recovery
- Part 2: Integer Factorization Based Mechanisms. Final Draft
International Standard, December 2001.
[30] ISO/IEC 18033-2: Information Technology - Security Techniques -
Encryption Algorithms - Part 2: Asymmetric Ciphers. V. Shoup,
editor, Text for 2nd Working Draft, January 2002.
[31] J. Jonsson. Security Proof for the RSA-PSS Signature Scheme
(extended abstract). Second Open NESSIE Workshop. September
2001. Full version available from
[32] J. Jonsson and B. Kaliski. On the Security of RSA Encryption
in TLS. In M. Yung, editor, Advances in Cryptology - CRYPTO
2002, vol. 2442 of Lecture Notes in Computer Science, pp. 127 -
142. Springer Verlag, 2002.
[33] Kaliski, B., "The MD2 Message-Digest Algorithm", RFC 1319,
April 1992.
[34] B. Kaliski. On Hash Function Identification in Signature
Schemes. In B. Preneel, editor, RSA Conference 2002,
Cryptographers' Track, volume 2271 of Lecture Notes in Computer
Science, pp. 1 - 16. Springer Verlag, 2002.
[35] Kaliski, B. and J. Staddon, "PKCS #1: RSA Cryptography
Specifications Version 2.0", RFC 2437, October 1998.
[36] J. Manger. A Chosen Ciphertext Attack on RSA Optimal
Asymmetric Encryption Padding (OAEP) as Standardized in PKCS #1
v2.0. In J. Kilian, editor, Advances in Cryptology - Crypto
2001, volume 2139 of Lecture Notes in Computer Science, pp. 260
- 274. Springer Verlag, 2001.
[37] A. Menezes, P. van Oorschot and S. Vanstone. Handbook of
Applied Cryptography. CRC Press, 1996.
[38] National Institute of Standards and Technology (NIST). FIPS
Publication 180-1: Secure Hash Standard. April 1994.
[39] National Institute of Standards and Technology (NIST). Draft
FIPS 180-2: Secure Hash Standard. Draft, May 2001. Available
from http://www.nist.gov/sha/.
Jonsson & Kaliski Informational [Page 68]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
[40] J.-J. Quisquater and C. Couvreur. Fast Decipherment Algorithm
for RSA Public-Key Cryptosystem. Electronics Letters, 18 (21),
pp. 905 - 907, October 1982.
[41] Rivest, R., "The MD5 Message-Digest Algorithm", RFC 1321, April
[42] R. Rivest, A. Shamir and L. Adleman. A Method for Obtaining
Digital Signatures and Public-Key Cryptosystems.
Communications of the ACM, 21 (2), pp. 120-126, February 1978.
[43] N. Rogier and P. Chauvaud. The Compression Function of MD2 is
not Collision Free. Presented at Selected Areas of
Cryptography '95. Carleton University, Ottawa, Canada. May
[44] RSA Laboratories. PKCS #1 v2.0: RSA Encryption Standard.
October 1998.
[45] RSA Laboratories. PKCS #7 v1.5: Cryptographic Message Syntax
Standard. November 1993. (Republished as IETF RFC 2315.)
[46] RSA Laboratories. PKCS #8 v1.2: Private-Key Information Syntax
Standard. November 1993.
[47] RSA Laboratories. PKCS #12 v1.0: Personal Information Exchange
Syntax Standard. June 1999.
[48] V. Shoup. OAEP Reconsidered. In J. Kilian, editor, Advances
in Cryptology - Crypto 2001, volume 2139 of Lecture Notes in
Computer Science, pp. 239 - 259. Springer Verlag, 2001.
[49] R. D. Silverman. A Cost-Based Security Analysis of Symmetric
and Asymmetric Key Lengths. RSA Laboratories Bulletin No. 13,
April 2000. Available from
[50] G. J. Simmons. Subliminal communication is easy using the DSA.
In T. Helleseth, editor, Advances in Cryptology - Eurocrypt
'93, volume 765 of Lecture Notes in Computer Science, pp. 218-
232. Springer-Verlag, 1993.
Jonsson & Kaliski Informational [Page 69]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
Appendix G: About PKCS
The Public-Key Cryptography Standards are specifications produced by
RSA Laboratories in cooperation with secure systems developers
worldwide for the purpose of accelerating the deployment of
public-key cryptography. First published in 1991 as a result of
meetings with a small group of early adopters of public-key
technology, the PKCS documents have become widely referenced and
implemented. Contributions from the PKCS series have become part of
many formal and de facto standards, including ANSI X9 and IEEE P1363
documents, PKIX, SET, S/MIME, SSL/TLS, and WAP/WTLS.
Further development of PKCS occurs through mailing list discussions
and occasional workshops, and suggestions for improvement are
welcome. For more information, contact:
PKCS Editor
RSA Laboratories
174 Middlesex Turnpike
Bedford, MA 01730 USA
Appendix H: Corrections Made During RFC Publication Process
The following corrections were made in converting the PKCS #1 v2.1
document to this RFC:
* The requirement that the parameters in an AlgorithmIdentifier
value for id-sha1, id-sha256, id-sha384, and id-sha512 be NULL was
changed to a recommendation that the parameters be omitted (while
still allowing the parameters to be NULL). This is to align with
the definitions originally promulgated by NIST. Implementations
MUST accept AlgorithmIdentifier values both without parameters and
with NULL parameters.
* The notes after RSADP and RSASP1 (Secs. 5.1.2 and 5.2.1) were
corrected to refer to step 2.b rather than 2.a.
* References [25], [27] and [32] were updated to reflect new
publication data.
These corrections will be reflected in future editions of PKCS #1
Security Considerations
Security issues are discussed throughout this memo.
Jonsson & Kaliski Informational [Page 70]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
This document is based on a contribution of RSA Laboratories, the
research center of RSA Security Inc. Any substantial use of the text
from this document must acknowledge RSA Security Inc. RSA Security
Inc. requests that all material mentioning or referencing this
document identify this as "RSA Security Inc. PKCS #1 v2.1".
Authors' Addresses
Jakob Jonsson
Philipps-Universitaet Marburg
Fachbereich Mathematik und Informatik
Hans Meerwein Strasse, Lahnberge
DE-35032 Marburg
Phone: +49 6421 28 25672
EMail: jonsson@mathematik.uni-marburg.de
Burt Kaliski
RSA Laboratories
174 Middlesex Turnpike
Bedford, MA 01730 USA
Phone: +1 781 515 7073
EMail: bkaliski@rsasecurity.com
Jonsson & Kaliski Informational [Page 71]
RFC 3447 PKCS #1: RSA Cryptography Specifications February 2003
Full Copyright Statement
Copyright (C) The Internet Society 2003. All Rights Reserved.
This document and translations of it may be copied and furnished to
others provided that the above copyright notice and this paragraph
are included on all such copies. However, this document itself may
not be modified in any way, such as by removing the copyright notice
or references to the Internet Society or other Internet
organizations, except as required to translate it into languages
other than English.
The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.
This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
Funding for the RFC Editor function is currently provided by the
Internet Society.
Jonsson & Kaliski Informational [Page 72] | {"url":"https://datatracker.ietf.org/doc/html/rfc3447.html","timestamp":"2024-11-06T15:04:21Z","content_type":"text/html","content_length":"208590","record_id":"<urn:uuid:eac85f79-4924-45a6-9fc0-876f1367b8b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00776.warc.gz"} |
DOI Number
In this paper macrodiversity system consisting of two microdiversity SC (Selection Combiner) receivers and one macrodiversity SC receiver are analyzed. Independent κ-μ fading and correlated slow
Gamma fading are present at the inputs to the microdiversity SC receivers. For this system model, analytical expression for the probability density of the signal at the output of the macrodiversity
receiver SC, and the output capacity of the macrodiversity SC receiver are calculated. The obtained results are graphically presented to show the impact of Rician κ factor, the shading severity of
the channel c, the number of clusters µ and correlation coefficient ρ on the probability density of the signal at the output of the macrodiversity system and channel capacity at the output of the
macrodiversity system. Based on the obtained results it is possible to analyze the real behavior of the macrodiversity system in the presence of κ-μ fading.
Joint probability density, channel capacity, macrodiversity SC receiver, correlation coefficient, Rician κ factor
G. L. Stüber, Principles of mobile communication, 2nd ed. New York: Kluwer Academic Publishers, 2002.
S. Panic, M. Stefanovic, J. Anastasov, and P. Spalevic, Fading and Interference Mitigation in Wireless Communications, 1st ed. Boca Raton, FL, USA: CRC Press, Inc., 2013.
M. K. Simon and M.-S. Alouini, Digital Communication over Fading Channels, 2nd ed. New York: John Wiley & Sons, Inc., 2005.
S. R. Panić, D. M. Stefanović, I. M. Petrović, M. Č. Stefanović, J. A. Anastassov, and D. S. Krstić, “Second order statistics of selection macro-diversity system operating over gamma shadowed κ-μ
fading channels,” EURASIP J. Wirel. Commun. Netw., vol. 2011, no. 151, pp. 1–7, 2011.
M. D. Yacoub, “The κ-μ distribution and the η-μ distribution,” IEEE Antennas Propag. Mag., vol. 49, no. 1, pp. 68–81, 2007.
N. Djordjević, B. S. Jakšić, A. Matović, M. Matović, and M. Smilić, “Moments of microdiversity egc receivers and macrodiversity sc receiver output signal over gamma shadowed nakagami-mmultipath
fading channel,” J. Electr. Eng., vol. 66, no. 6, pp. 348–351, 2015.
A. V. Marković, Z. H. Perić, D. B. Đošić, M. M. Smilić, and B. S. Jakšić, “Level Crossing Rate of Macrodiversity System Over Composite Gamma Shadowed Alpha-Kappa-Mu Multipath Fading Channel,” Facta
Universitatis, Ser. Autom. Control Robot., vol. 14, no. 2, pp. 99–109, 2015.
D. Krstic, V. Doljak, M. Stefanovic, and B. Jaksic, “Second order statistics of macrodiversity SC receiver output signal over Gamma shadowed K-μ multipath fading channel,” in Proceedings of the 2016
International Conference on Broadband Communications for Next Generation Networks and Multimedia Applications (CoBCom), 2016, pp. 1–6.
J. Proakis, Digital Communications, 4th ed. New York: McGraw-Hill, 2001.
P. M. Shankar, “Analysis of microdiversity and dual channel macrodiversity in shadowed fading channels using a compound fading model,” AEU - Int. J. Electron. Commun., vol. 62, no. 6, pp. 445–449,
Jun. 2008.
P. C. Spalevic, B. S. Jaksic, B. P. Prlincevic, I. Dinic, and M. M. Smilic, “Signal Moments at the Output from the Macrodiversity System with Three MRC Micro Diversity Receivers in the Presence of k
- μ F ading,” in Proceedings of IEEE conference TELSIKS 2015, 2015, pp. 271–274.
“Wolfram Functions Site.” [Online]. Available: http://functions.wolfram.com. [Accessed: 10-Jun-2016].
J. Li, A. Bose, and Y. Q. Zhao, “Rayleigh flat fading channels’ capacity,” in Proceedings of the 3rd Annual Communication Networks and Services Research Conference, 2005, vol. 2005, pp. 214–217.
P. Varzakas, “Average channel capacity for Rayleigh fading spread spectrum MIMO systems,” Int. J. Commun. Syst., vol. 19, no. 10, pp. 1081–1087, 2006.
W. Hu, L. Wang, G. Cai, and G. Chen, “Non-Coherent Capacity of M -ary DCSK Modulation System over Multipath Rayleigh Fading Channels,” IEEE Access, vol. 5, no. 1, pp. 956–966, 2017.
P. Yang, Y. Wu, and H. Yang, “Capacity of Nakagami- $m$ Fading Channel With BPSK/QPSK Modulations,” IEEE Commun. Lett., vol. 21, no. 3, pp. 564–567, 2017.
J. M. Romero-Jerez and F. J. Lopez-Martinez, “Fundamental capacity limits of spectrum-sharing in Hoyt (Nakagami-q) fading channels,” in IEEE Vehicular Technology Conference, 2017.
D. B. Djosic, D. M. Stefanovic, and C. M. Stefanovic, “Level Crossing Rate of Macro-diversity System with Two Micro-diversity SC Receivers over Correlated Gamma Shadowed α–µ Multipath Fading
Channels,” IETE J. Res., vol. 62, no. 2, pp. 140–145, 2016.
S. R. Panić, D. M. Stefanović, I. M. Petrović, M. Č. Stefanović, J. A. Anastasov, and D. S. Krstić, “Second-order statistics of selection macro-diversity system operating over Gamma shadowed $κ$-$μ$
fading channels,” EURASIP J. Wirel. Commun. Netw., vol. 2011, no. 1, p. 151, Oct. 2011.
P. S. Bithas and A. A. Rontogiannis, “Mobile Communication Systems in the Presence of Fading/Shadowing, Noise and Interference,” pp. 1–14, 2014.
M. Stefanović, S. R. Panić, N. Simić, P. Spalević, and Č. Stefanović, “On the macrodiversity reception in the correlated gamma shadowed Nakagami-M fading,” Teh. Vjesn., vol. 21, no. 3, pp. 511–515,
B. Jaksic, M. Stefanovic, D. Aleksic, D. Radenkovic, and S. Minic, “First-Order Statistical Characteristics of Macrodiversity System with Three Microdiversity MRC Receivers in the Presence of κ-μ
Short-Term Fading and Gamma Lon,” J. Electr. Comput. Eng., vol. 2016, pp. 1–9, 2016.
I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 5th ed. Sad Diego: San Diego, Academic Press.
M. S. Alouini and A. J. Goldsmith, “Capacity of Rayleigh fading channels under different adaptive transmission and diversity-combining techniques,” IEEE Trans. Veh. Technol., vol. 48, no. 4, pp.
1165–1181, 1999.
N. Y. Ermolova, “Capacity Analysis of Two-Wave with Diffuse Power Fading Channels Using a Mixture of Gamma Distributions,” IEEE Commun. Lett., vol. 20, no. 11, pp. 2245–2248, 2016.
B. S. Jakšić, “Level Crossing Rate of Macrodiversity SC Receiver with two Microdiversity SC Receivers over Gamma Shadowed Multipath Fading Channel,” Facta Universitatis, Ser. Autom. Control Robot.,
vol. 14, no. 2, pp. 87–98, Mar. 2015.
A. P. Prudnikov and J. A. Brychkov, Integrasl and series, 2nd ed. Moscow: Moscow, Fizmatlit, 2003.
• There are currently no refbacks.
ISSN: 0353-3670 (Print)
ISSN: 2217-5997 (Online)
COBISS.SR-ID 12826626 | {"url":"https://casopisi.junis.ni.ac.rs/index.php/FUElectEnerg/article/view/3318","timestamp":"2024-11-04T02:44:39Z","content_type":"application/xhtml+xml","content_length":"25882","record_id":"<urn:uuid:14f5c2fb-5a76-4fec-a3ea-2d58fda32adf>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00204.warc.gz"} |
Discourse Analysis: A Catalyst for Reflective Inquiry in Mathematics Classrooms
This project is examining the nature of mathematical discourse in middle school mathematics classrooms; the ways in which middle school mathematics teachers’ beliefs impact the discourse when working
to enact reform-oriented instruction; and how this information can be used to incorporate practitioner research using concepts and tools of discourse analysis to improve mathematics instruction. The
educational goal is to design a long-term professional development program that will continue beyond funding with other cohorts of teachers. | {"url":"https://cadrek12.org/projects/discourse-analysis-catalyst-reflective-inquiry-mathematics-classrooms-0","timestamp":"2024-11-04T21:49:19Z","content_type":"text/html","content_length":"40535","record_id":"<urn:uuid:b3e8a4ff-c9a2-4536-856a-30c963da8e0d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00008.warc.gz"} |
Book on math fundamentals
Registered: 2016-12-12
Posts: 19
Book on math fundamentals
Over the last year I have picked up a handful of math books, looking to better my understanding of math fundamentals before going further. One of those books which I have found to be excellent in
clarifying concepts of fundamental topics is, Yes, But Why? Teaching for understanding in mathematics, Ed Southall.
This book is obviously aimed at teachers, but as an independent learner I have found it to be an excellent resource for getting a solid birdseye view of various topics and how things work. I only
wish that it existed long ago (it was released earlier this year), as I have wasted alot of time over the years digging through various resources, often running in circles and hitting walls or
becoming bored to death. The book begins at arithmetic and graduates through topics in algebra, geometry, and ends with fundamentals in trigonometry.
What I like about it:
- It gets right to the point of concepts without burrying the reader in a bunch of fluff or too much detail from the getgo.
- It doesn't focus on procedures over concepts.
- It makes good connections from one topic to the next, not leaving gaping holes in understanding.
- Reading it has been a series of 'Ah, ha' moments without the brickwalls of so many other resources.
- It makes good use of graphics for explaining concepts; doesn't include graphics for the sake of dressing things up.
- It isn't stodgy in style; it is very conversational in style. It is an educational and fun read at the same time.
- It makes many mentions of where math terms come from and how they relate to the topic at hand, along with other historical bits.
- It is (along with many articles at MathisFun) what I feel in many ways, how math should be taught.
I am not associated with this book in any way. Just sharing a resource that I have found to be exceptional among math books as an adult self-learner relearning math fundamentals. This book, along
with MathisFun, Geogebra, and exploring using a compass and rule (because working with my hands is often more satisfying) have been time well spent. I am finding myself to be a visual learner, where
numbers often are only added precision in application of concepts.
Last edited by numquester (2017-11-06 07:04:27)
From: Indonesia
Registered: 2015-12-02
Posts: 2,000
Re: Book on math fundamentals
numquester wrote:
- It gets right to the point of concepts without burrying the reader in a bunch of fluff or too much detail from the getgo.
- It doesn't focus on procedures over concepts.
The weakness of many math books (including the ones I wrote).
Actually I never watch Star Wars and not interested in it anyway, but I choose a Yoda card as my avatar in honor of our great friend bobbym who has passed away.
May his adventurous soul rest in peace at heaven.
Registered: 2022-01-30
Posts: 3
Re: Book on math fundamentals
Adding a free and vast STEM resource here. The Bartleby concept explainers page has free explainer for math and calculus concepts as well | {"url":"https://mathisfunforum.com/viewtopic.php?id=24115","timestamp":"2024-11-03T07:30:59Z","content_type":"application/xhtml+xml","content_length":"11440","record_id":"<urn:uuid:939f4739-9d47-43d2-b18d-02ad47c7d69a>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00552.warc.gz"} |
Biot Savart Law in Aerodynamics | Applications, Analysis & Insights
Biot savart law in aerodynamics
Explore the pivotal role of the Biot-Savart Law in aerodynamics for aircraft design, vortex analysis, and sustainable aviation technology.
Biot-Savart Law in Aerodynamics: Understanding the Fundamentals
The Biot-Savart Law, a cornerstone of electromagnetism, finds intriguing applications in the field of aerodynamics, particularly in the analysis and design of aircraft and propulsion systems. This
law, traditionally used to calculate magnetic fields produced by electric currents, offers a unique perspective in aerodynamics when used to analyze the induced velocity fields in the wake of an
aircraft or in the vicinity of propellers and rotors.
Application of Biot-Savart Law in Aerodynamics
In aerodynamics, the Biot-Savart Law is instrumental in understanding the behavior of vortices – lines or tubes of circulating fluid which play a crucial role in the generation of lift on a wing.
According to the law, the velocity induced at a point in space by a small segment of vortex is inversely proportional to the distance from the segment and proportional to the strength of the vortex.
This principle helps in calculating the velocity field around an aircraft, which is essential for predicting lift, drag, and other aerodynamic forces.
Analysis of Vortex Dynamics
The intricate patterns of airflow around wings, propellers, and rotors can be analyzed using the Biot-Savart Law. This analysis is particularly important in understanding wingtip vortices, which are
significant contributors to induced drag. By applying the law, engineers can devise wing designs that minimize these vortices, enhancing the efficiency of the aircraft. Furthermore, in the realm of
rotorcraft aerodynamics, the Biot-Savart Law aids in understanding the complex interactions between rotor blades and the vortices they generate, which is crucial for optimizing performance and
reducing noise.
Insights into Aerodynamic Efficiency
By leveraging the Biot-Savart Law, aerodynamicists gain valuable insights into the flow behavior around various aircraft components. This understanding is pivotal in designing more efficient and
environmentally friendly aircraft. In the era of increasing environmental concerns, such insights contribute significantly to the development of sustainable aviation technologies.
Moreover, the application of the Biot-Savart Law extends beyond traditional fixed-wing aircraft to innovative designs like blended-wing bodies and unmanned aerial vehicles (UAVs). These applications
underscore the versatility and enduring relevance of the law in modern aerodynamics.
In summary, the Biot-Savart Law serves as a powerful analytical tool in aerodynamics, enabling a deeper understanding of vortex dynamics and contributing to the design of more efficient aircraft. Its
applications in aerodynamic analysis not only enhance aircraft performance but also pave the way for innovative and sustainable aviation technologies.
Advanced Applications and Computational Methods
The application of the Biot-Savart Law in aerodynamics extends to advanced computational methods for analyzing and predicting airflow patterns. Computational Fluid Dynamics (CFD), a pivotal tool in
aerodynamic design, often incorporates principles derived from the Biot-Savart Law to simulate the behavior of vortices and their impact on aircraft performance. These simulations allow for the
optimization of designs in a virtual environment, significantly reducing the need for costly and time-consuming physical wind tunnel testing.
Impact on Propulsion Systems and Noise Reduction
Another significant application of the Biot-Savart Law is in the design and analysis of propulsion systems, such as jet engines and propellers. Understanding the vortex structures in the wake of
these systems is crucial for enhancing thrust efficiency and reducing acoustic footprints. This aspect is particularly important in urban areas, where noise pollution from aircraft is a growing
concern. By optimizing the design of propulsion systems using insights gained from the Biot-Savart Law, engineers can develop quieter and more efficient engines, contributing to the global effort to
mitigate noise pollution.
Challenges and Future Directions
While the Biot-Savart Law provides a robust framework for understanding aerodynamic phenomena, its application in complex real-world scenarios presents challenges. The intricate nature of vortex
interactions in turbulent flows requires sophisticated computational models and high-fidelity simulations. Future research in this area is expected to focus on improving these models and developing
more accurate and efficient simulation techniques.
The Biot-Savart Law, a principle originally formulated to describe magnetic fields, has found a unique and invaluable application in the field of aerodynamics. Its role in understanding and analyzing
vortex dynamics has been fundamental in advancing aircraft design, propulsion system efficiency, and noise reduction strategies. The integration of this law into computational fluid dynamics has
revolutionized the way aerodynamicists approach design and optimization challenges, paving the way for more efficient and environmentally friendly aviation technologies.
As the aviation industry continues to evolve, the importance of the Biot-Savart Law in aerodynamic analysis and design is only set to increase. Its applications in tackling modern challenges such as
noise pollution, fuel efficiency, and the development of novel aircraft designs underscore its enduring relevance. In the quest for sustainable aviation, the insights provided by the Biot-Savart Law
will undoubtedly play a critical role in shaping the future of air travel and aircraft design. | {"url":"https://modern-physics.org/biot-savart-law-in-aerodynamics/","timestamp":"2024-11-13T11:32:31Z","content_type":"text/html","content_length":"161449","record_id":"<urn:uuid:ef35cd46-a7c7-49f2-909a-4c30f3704f91>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00570.warc.gz"} |
[Solved] A joker's cap is in the form of a right circular cone ... | Filo
A joker's cap is in the form of a right circular cone of base radius and height . Find the area of the sheet required to make such caps.
Not the question you're searching for?
+ Ask your question
cm, cm
CSA of conical cap
CSA of cones cm
Was this solution helpful?
Video solutions (4)
Learn from their 1-to-1 discussion with Filo tutors.
3 mins
Uploaded on: 1/5/2024
Was this solution helpful?
11 mins
Uploaded on: 9/18/2023
Was this solution helpful?
Found 2 tutors discussing this question
Discuss this question LIVE for FREE
6 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions from Mathematics (NCERT)
Practice questions from Mathematics (NCERT)
View more
Practice more questions from Surface Areas and Volumes
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text A joker's cap is in the form of a right circular cone of base radius and height . Find the area of the sheet required to make such caps.
Updated On Jan 5, 2024
Topic Surface Areas and Volumes
Subject Mathematics
Class Class 9
Answer Type Text solution:1 Video solution: 4
Upvotes 341
Avg. Video Duration 5 min | {"url":"https://askfilo.com/math-question-answers/a-joker-s-cap-is-in-the-form-of-a-right-circular-clkn","timestamp":"2024-11-05T16:58:59Z","content_type":"text/html","content_length":"378679","record_id":"<urn:uuid:62b47304-98a4-48f8-9814-8af7967d501f>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00051.warc.gz"} |
Why Nobody Is Talking About Best Calculus Textbook
Machine and Deep learning are among the hottest fields in the last few years. In reality, all you will need is a computer with internet access, a headset, and a cozy place to sit down. Computer
science is awesome! Chemistry is among my strongest subjects. Mathematics cannot be learned properly without doing a huge number of issues. When it has to do with mathematics, lessons employing the
exact textbook as in the conventional classroom are incredibly important. Hence Calculus has an important part in our everyday lives.
An author's story should be well thought out and planned, much enjoy an intricate mathematical issue. You will likely write 0 books and total hundreds of computer science issues. Your textbook will
definitely supply you with a good deal of examples. Somethings you simply don't learn in a textbook as there's no substitute then the true thing. You may purchase chemistry textbooks or locate other
textbooks by subject and buy or offer them on the market. Don't lose out on the essential topics As such, the whole syllabus is critical. Solving previous year papers is the most essential and
necessary action to do if you wish to excel in JEE Main exam.
Ideas, Formulas and Shortcuts for Best Calculus Textbook
When schools start to interview they will need to decrease the invited people to a manageable number. Hence the teacher relinquishes a lot of authority and becomes a facilitator. Therefore, teachers
have to offer prior attention to the qualities of the students. With the above-mentioned issues, the teacher cannot expect much from the students concerning the application of concepts.
Students should comprehend the fundamentals of Calculus and get Calculus help well to be able to be in a position to work Calculus difficulties. The students discover that it's tricky to work with
because of insufficient wisdom and practice. They can also get help with Calculus homework questions. Finally, students learning math could not see the essence of it.
Students find it rather hard, as one might anticipate. Students should be provided an opportunity to learn in an environment where there's no teacher dominance. All that is required on the section of
the student is an online connection. Students may work any moment at their own speed. They work at their own pace within a reasonable time frame. A student who's skillful realizes that problem occurs
with the alternate method, and they don't study that technique. Often times many different students have precisely the same questions that you do.
There's simply no point in stumbling about once you know there are those who can point you in the ideal directions. For most people, only the concept of knowing they coulddo something is sufficient
to make them happy. Or you simply did not receive the idea the very first time round. There's one more very important idea associated with fields. The facts might explain that which we see. Also, it
has a number good and well chosen examples in every single section, something I feel is vital.
The Basics of Best Calculus Textbook
As luck would have it, each chapter includes an extremely very good set of references. It contains a detailed bibliography for additional reading, which is one of the most interesting aspects of the
book-the author comments on other works and how they have influenced his presentation. You compose the very first chapter, refer to your high-level outline and incorporate the numerous variables you
defined at the start of the approach.
Introduction to Integral Calculus is a superb book for upper-undergraduate calculus courses and is likewise a perfect reference for students and professionals who want to gain a more comprehension of
using calculus to address problems in a simplified manner. In truth, it might make for even more exciting and productive course for those students. As a caring parent, you want to make certain the
math tutoring site gives you one-on-one instruction.
While knowledge is intended to be assumed at a particular level, I feel as if there's a better approach to ease into it. You ought to have a fair comprehension of Kinematics and Dynamics and it is
necessary to be in a position to create a strong problem solving ability. Closely about the notion of a field is the notion of a subfield.
My favored subject throughout my studies in school was mathematics that is a powerful and very intriguing path of study. From time to time, you might discover that it's simpler to ask a person to
point the way... However, don't forget to be quite precise with your question, or you may be pointed to the incorrect direction! NEVER work a problem that you are unable to check! Work the problems
you recognize first. Often there are lots of means of working the exact issue. The very first and most serious issue with Taubes' book is it is not really a textbook whatsoever, it is a set of
lecture notes. Then the class changes again, as it's possible to learn more complicated methods of integrating.
what is | {"url":"https://www.bengislife.com/2018/10/why-nobody-is-talking-about-best.html","timestamp":"2024-11-06T08:01:15Z","content_type":"text/html","content_length":"87478","record_id":"<urn:uuid:a70cf3c5-2d24-433f-88ca-dabf2316b293>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00819.warc.gz"} |
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
I'm not much of a math wiz but the Algebrator helps me out with fractions and other stuff I need to know for my math appreciation class in college.
P.W., Illinois
I consider this software as replacement of human algebra tutor. That too, at a very affordable price.
Alden Lewis, WI
I want to thank you for all you help. Your spport in resolving how do a problem has helped me understand how to do the problems, and actually get the right result. Thanks So Much.
Maria Chavez, TX
This version of algebra help is awesome! Better interface, better helps and easier to work with. In fact, I have improved my algebra from failing to pass since I started using it.
Margret Dixx, AL
Search phrases used on 2008-06-27:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
• mcdougal littell algebra 2 chapter tests
• elementary math formula for combinations
• solutions for basic permutation and combination problems
• Principles of Mathematical Analysis rudin solutions ebook
• solving square root complex algebraic equation
• complex numbers simultaneous equation solver
• simply radical calculator
• grade nine questions
• help with algerbra high school
• math worksheets for 7th graders on rational numbers
• math trivia
• hyperbola answers
• simplify root
• mathamatics
• 7th grade algebra 1 2 step equations
• pre-algebra equations
• simple steps to be genius in algebra problems sums
• Pre algebra for 6th graders
• why do we use radical notation and when
• integers worksheets middle school
• do my algebra
• algebra printouts
• mathamatics games
• algebra 1 answers
• iterative online calculator
• year 8 algebra quiz
• 5th grade math trivia questions
• free math test for 9th graders
• free online t-89 calculator
• online exponential notation calculator
• "SOLVING COLLEGE MATH PROBLEMS"
• square root calculator rationalized
• glencoe algebra 1 f
• implicit differentiation calculator
• parabola, stretched
• accounting manual download
• GRE binomial function
• Probability, Combination and Permutation Analysis
• finding Least Common Denominator inspection method
• larson algebra ebooks
• gcse science practice exam print-out worksheets
• 5th grade math problems
• adding and subtracting negative numbers word problem
• free e-books on CAT exam
• RULES FOR GRAPHING A LINEAR EQUATION
• how to divide radical expressions
• Math trivias
• quadratic fractions
• is synthetic division used to calculate the divisor in a rational expression
• free algebra worksheets for 8th graders
• Grade Nine Math Worksheets
• mcdougal geometry worksheet
• simple algebraic equations + sample
• calculators on the math CLEP
• linear equation worksheet
• +multiply percents worksheets for 7th grade
• cost accounting ebook download
• solving math problems with +decimels
• sixth grade mark wooksheets
• free printable first grade math puzzles
• beginners algebra warm up activity
• trapezoidal method of integrating velocity profile
• adding subtraction integers rules
• using square root method
• video algebra 2 with trigonometry prentice hall
• algebra, beginner
• simultaneous pde in excel
• math practice for children entering six grade
• activities for multiplying decimal
• easy algebra explanations
• sample lesson plan in square root property
• polynomials solver for factoring
• how to solve aptitude questions
• sample math aptitude faqs for practise
• trigonometry work sheets
• basic algerbra
• free accounts books
• substitution method examples
• " maths yr 9 trigonometry worksheets"
• 6th grade permutations
• square roots and exponents
• hARDEST MATH EQUATION
• algebra 1A in texas
• math work books containing combinations and permutations
• algebra 2 vertices
• free tests maths yr 8
• basics of maths for dummies
• 10th grade SOL Math WorkSheets
• software for helping kids with TAKS test
• system of equations ti-83
• TI 83 plus, how to solve for x
• summation notation solver
• Algebra Solver
• lial hornsby mcginnis introductory algebra eighth edition final exam
• mathematical induction calculator
• parabola graph calculator | {"url":"https://softmath.com/algebra-help/formulae-sheet-algebra-axis.html","timestamp":"2024-11-12T06:20:58Z","content_type":"text/html","content_length":"35352","record_id":"<urn:uuid:506c3403-ffc0-435d-b95a-8c58ada2e62a>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00170.warc.gz"} |
Ratio, Proportion and Percentages Puzzle – Challenge 26
This is a great math challenge and it is a perfect educational activity for practicing children's math skills.
The sale price of a computer with \(20\%\) discount is $20 cheaper than its price with \(15\%\) discount. What is the original price of the computer?
The Absolute Best Book to Challenge Your Smart Student!
Original price was: $16.99.Current price is: $11.99.
The correct answer is 400.
Let’s choose x for the original price of the computer. Then,
0.80 x = 0.85 x – 20 → 0.05x = 20 → x = 400
The original price of the computer is $400
The Absolute Best Books to Ace Algebra
Related to This Article
What people say about "Ratio, Proportion and Percentages Puzzle – Challenge 26 - Effortless Math: We Help Students Learn to LOVE Mathematics"?
No one replied yet. | {"url":"https://www.effortlessmath.com/math-puzzles/ratio-proportion-and-percentages-puzzle-challenge-26/","timestamp":"2024-11-11T03:56:48Z","content_type":"text/html","content_length":"86358","record_id":"<urn:uuid:268d0bb4-db12-4d88-85f6-d5aac1eab835>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00110.warc.gz"} |
How to Check For an Empty Intersection Of Lists With Haskell?
To check for an empty intersection of lists in Haskell, you can make use of the built-in intersect function from the Data.List module. The intersect function takes two lists as arguments and returns
a new list that contains only the common elements between the two lists.
To determine if the intersection is empty, you can check the length of the resulting list. If the length is zero, it means that there are no common elements, indicating an empty intersection. Here is
an example of how you can do this:
1 import Data.List (intersect)
3 hasEmptyIntersection :: Eq a => [a] -> [a] -> Bool
4 hasEmptyIntersection list1 list2 = null (intersect list1 list2)
6 main :: IO ()
7 main = do
8 let list1 = [1, 2, 3, 4]
9 list2 = [5, 6, 7, 8]
10 list3 = [3, 4, 5, 6]
12 print (hasEmptyIntersection list1 list2) -- Output: True
13 print (hasEmptyIntersection list1 list3) -- Output: False
In the above example, the hasEmptyIntersection function takes two lists as arguments. It first uses the intersect function to find the common elements between the two lists. Then, it checks if the
resulting list is empty using the null function. If it is empty, the function returns True, indicating an empty intersection. Otherwise, it returns False.
The main function demonstrates how to use the hasEmptyIntersection function with different lists. In this case, the first example has an empty intersection between list1 and list2, so it prints True.
The second example has a non-empty intersection between list1 and list3, so it prints False.
How to find the product of elements in a list in Haskell?
To find the product of elements in a list in Haskell, you can use the product function. Here's an example:
1 main = do
2 let myList = [1, 2, 3, 4, 5]
3 let productResult = product myList
4 putStrLn $ "Product of elements in the list: " ++ show productResult
In this example, we define a list myList containing integers, and then use the product function to calculate the product of all the elements in the list. Finally, we print the result using putStrLn.
When you run this code, it will output:
1 Product of elements in the list: 120
The product function takes a list of numbers as an argument and returns their product.
How to convert a list of strings into a single string in Haskell?
To convert a list of strings into a single string in Haskell, you can use the concat function from the Prelude and apply it to the list of strings. Here is an example:
1 concatStrings :: [String] -> String
2 concatStrings strings = concat strings
You can also use the intercalate function from the Data.List module, which is a more efficient way to join strings together by inserting a separator between each element:
1 import Data.List (intercalate)
3 concatStrings :: [String] -> String
4 concatStrings strings = intercalate "" strings
Both functions take a list of strings as input and concatenate them into a single string. The concat function simply concatenates all the strings together, while the intercalate function concatenates
the strings with a specified separator (in this case, an empty string "").
How to reverse a list in Haskell?
There are multiple ways to reverse a list in Haskell. Here are a few examples:
1. Using the built-in reverse function:
1 reverseList :: [a] -> [a]
2 reverseList = reverse
1. Using pattern matching and recursion:
1 reverseList :: [a] -> [a]
2 reverseList [] = []
3 reverseList (x:xs) = reverseList xs ++ [x]
1. Using a fold:
1 reverseList :: [a] -> [a]
2 reverseList = foldl (flip (:)) []
All of these functions take a list as input and return a new list with the elements in reverse order.
How to find the maximum element in a list in Haskell?
There are several ways to find the maximum element in a list in Haskell. Here are three common approaches:
1. Using the maximum function from the Prelude module:
1 import Data.List (maximum)
3 myList :: [Int]
4 myList = [1, 5, 3, 9, 2]
6 maxElement :: Int
7 maxElement = maximum myList
9 main :: IO ()
10 main = print maxElement
1. Using pattern matching and recursion:
1 myList :: [Int]
2 myList = [1, 5, 3, 9, 2]
4 maxElement :: [Int] -> Int
5 maxElement [x] = x
6 maxElement (x:xs) = max x (maxElement xs)
8 main :: IO ()
9 main = print (maxElement myList)
1. Using a fold function:
1 myList :: [Int]
2 myList = [1, 5, 3, 9, 2]
4 maxElement :: [Int] -> Int
5 maxElement = foldr1 max
7 main :: IO ()
8 main = print (maxElement myList)
Note that these approaches assume that the list is not empty. You may need to add additional code to handle empty lists if necessary.
How to remove an element from a list in Haskell?
There are several ways to remove an element from a list in Haskell. Here are a few approaches:
1. Using List Comprehension: removeElement :: Eq a => a -> [a] -> [a] removeElement x xs = [y | y <- xs, y /= x]
2. Using the filter function: removeElement :: Eq a => a -> [a] -> [a] removeElement x xs = filter (/= x) xs
3. Using pattern matching: removeElement :: Eq a => a -> [a] -> [a] removeElement _ [] = [] removeElement x (y:ys) | x == y = removeElement x ys | otherwise = y : removeElement x ys
4. Using the delete function from the Data.List module: import Data.List (delete) removeElement :: Eq a => a -> [a] -> [a] removeElement x xs = delete x xs
These approaches provide different ways to achieve the same goal of removing an element from a list. Choose the one that best suits your needs and preferences. | {"url":"https://ubuntuask.com/blog/how-to-check-for-an-empty-intersection-of-lists","timestamp":"2024-11-09T06:36:16Z","content_type":"text/html","content_length":"361094","record_id":"<urn:uuid:8a6f0fc8-4b00-4a7b-bf1d-bb7ad6b6489f>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00738.warc.gz"} |
confusion: Given actual and predicted group assignments, give the... in DAAG: Data Analysis and Graphics Data and Functions
Given actual and predicted group assignments, give the confusion matrix
confusion(actual, predicted, gpnames = NULL, rowcol=c("actual", "predicted"), printit = c("overall","confusion"), prior = NULL, digits=3)
actual Actual (prior) group assigments
predicted Predicted group assigments.
gpnames Names for groups, if different from levels(actual)
rowcol For predicted categories to appear as rows, specify rowcol="predicted"
printit Character vector. Print "overall", or "confusion" matrix, or both.
prior Prior probabilities for groups, if different from the relative group frequencies
digits Number of decimal digits to display in printed output
Prior probabilities for groups, if different from the relative group frequencies
Predicted group assignments should be estimated from cross-validation or from bootstrap out-of-bag data. Better still, work with assignments for test data that are completely separate from the data
used to dervive the model.
A list with elements overall (overall accuracy), confusion (confusion matrix) and prior (prior used for calculation of overall accuracy)
Maindonald and Braun: 'Data Analysis and Graphics Using R', 3rd edition 2010, Section 12.2.2
library(MASS) library(DAAG) cl <- lda(species ~ length+breadth, data=cuckoos, CV=TRUE)$class confusion(cl, cuckoos$species) ## The function is currently defined as function (actual, predicted,
gpnames = NULL, rowcol = c("actual", "predicted"), printit = c("overall","confusion"), prior = NULL, digits = 3) { if (is.null(gpnames)) gpnames <- levels(actual) if (is.logical(printit)){ if
(printit)printit <- c("overall","confusion") else printit <- "" } tab <- table(actual, predicted) acctab <- t(apply(tab, 1, function(x) x/sum(x))) dimnames(acctab) <- list(Actual = gpnames,
`Predicted (cv)` = gpnames) if (is.null(prior)) { relnum <- table(actual) prior <- relnum/sum(relnum) acc <- sum(tab[row(tab) == col(tab)])/sum(tab) } else { acc <- sum(prior * diag(acctab)) } names
(prior) <- gpnames if ("overall"%in%printit) { cat("Overall accuracy =", round(acc, digits), "\n") if(is.null(prior)){ cat("This assumes the following prior frequencies:", "\n") print(round(prior,
digits)) } } if ("confusion"%in%printit) { cat("\nConfusion matrix", "\n") print(round(acctab, digits)) } invisible(list(overall=acc, confusion=acctab, prior=prior)) }
For more information on customizing the embed code, read Embedding Snippets. | {"url":"https://rdrr.io/cran/DAAG/man/confusion.html","timestamp":"2024-11-13T08:12:52Z","content_type":"text/html","content_length":"33880","record_id":"<urn:uuid:39750890-b26d-4b17-80cd-51240169e938>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00834.warc.gz"} |
Calculation formulas | Documentation
This document contains information about the statistical formulas used in KensoBI panels and data sources.
Table of Contents
Sample Mean Formula
The sample mean ($\bar{X}$) is a measure of central tendency, representing the average value of a set of sample data.
The formula for the sample mean is given by:
$\bar{X} = \frac{\sum_{i=1}^{n} X_i}{n}$
• $\bar{X}$ is the sample mean,
• $n$ is the number of data points in the sample,
• $X_i$ represents each individual data point.
In this formula, $\sum$ represents the sum, and $\bar{X}$ denotes the mean.
Average Range Formula
The average range ($\bar{R}$) in statistical process control is calculated as the average of the individual ranges. The range is the difference between the maximum and minimum values in a sample.
The formula for average range is given by:
$\bar{R} = \frac{R_1 + R_2 + \ldots + R_k}{k}$
• $\bar{R}$ is the average range,
• $R_1, R_2, \ldots, R_k$ are the individual ranges in each of the $k$ samples, and
• $k$ is the number of samples.
This formula represents the central tendency of the range values and is commonly used in control chart calculations, such as in the context of an R-chart in statistical process control.
Standard Deviation Formulas
The standard deviation ($\sigma$) is a measure of the amount of variation or dispersion in a set of values.
Population Standard Deviation Formula
The formula for the population standard deviation is given by:
$\sigma = \sqrt{\frac{\sum_{i=1}^{N}(X_i - \mu)^2}{N}}$
• $\sigma$ is the population standard deviation,
• $N$ is the number of data points in the population,
• $X_i$ represents each individual data point,
• $\mu$ is the mean of the population.
Sample Standard Deviation Formula
The formula for the sample standard deviation is given by:
$s = \sqrt{\frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}}$
• $s$ is the sample standard deviation,
• $n$ is the number of data points in the sample,
• $X_i$ represents each individual data point,
• $\bar{X}$ is the mean of the sample.
In both formulas, $\sqrt{}$ denotes the square root, $\sum$ represents the sum, and $\bar{X}$ denotes the mean.
Sample Range Formula
The sample range ($R$) is a measure of the spread or dispersion of a set of sample data.
The formula for the sample range is given by:
$R = X_{\text{max}} - X_{\text{min}}$
• $R$ is the sample range,
• $X_{\text{max}}$ is the maximum value in the sample,
• $X_{\text{min}}$ is the minimum value in the sample.
In this formula, $X_{\text{max}}$ represents the maximum value, and $X_{\text{min}}$ represents the minimum value in the sample.
Mean of Means Formula
The mean of means ($\bar{\bar{X}}$) is a measure that represents the average of sample means across multiple groups or samples.
The formula for the mean of means is given by:
$\bar{\bar{X}} = \frac{\sum_{i=1}^{k} \bar{X}_i}{k}$
• $\bar{\bar{X}}$ is the mean of means,
• $k$ is the number of groups or samples,
• $\bar{X}_i$ represents the mean of the $i^{th}$ group or sample.
In this formula, $\sum$ represents the sum, and $\bar{\bar{X}}$ denotes the mean of means.
Capability Indices Formulas
Capability indices are statistical measures used to assess the ability of a process to meet specifications.
Cp Formula
The Cp (Process Capability) index is calculated as:
$Cp = \frac{{\text{{USL}} - \text{{LSL}}}}{{6 \times \sigma}}$
Pp Formula
The Pp (Potential Process Capability) index is calculated as:
$Pp = \frac{{\text{(nominal + USL)} - \text{(nominal + LSL)}}}{{6 \times \sigma_x}}$
Cpk Formula
The Cpk (Process Capability Index) index is calculated as:
$Cpk=\mathrm{min}\left(\frac{\text{USL}-\stackrel{ˉ}{\stackrel{ˉ}{x}}}{3×\sigma },\frac{\stackrel{}{\stackrel{ˉ}{x}}}{}$ | {"url":"https://docs.kensobi.com/panels/spc-chart/functions","timestamp":"2024-11-05T16:42:45Z","content_type":"text/html","content_length":"163653","record_id":"<urn:uuid:61506f34-0fd2-459b-89e8-b760a57b5ae8>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00695.warc.gz"} |
Calculate Variance-Covariance Matrix for CLV Models fitted
vcov.clv.fitted {CLVTools} R Documentation
Calculate Variance-Covariance Matrix for CLV Models fitted with Maximum Likelihood Estimation
Returns the variance-covariance matrix of the parameters of the fitted model object. The variance-covariance matrix is derived from the Hessian that results from the optimization procedure. First,
the Moore-Penrose generalized inverse of the Hessian is used to obtain an estimate of the variance-covariance matrix. Next, because some parameters may be transformed for the purpose of restricting
their value during the log-likelihood estimation, the variance estimates are adapted to be comparable to the reported coefficient estimates. If the result is not positive definite, Matrix::nearPD is
used with standard settings to find the nearest positive definite matrix.
If multiple estimation methods were used, the Hessian of the last method is used.
## S3 method for class 'clv.fitted'
vcov(object, ...)
object a fitted clv model object
... Ignored
A matrix of the estimated covariances between the parameters of the model. The row and column names correspond to the parameter names given by the coef method.
See Also
MASS::ginv, Matrix::nearPD
version 0.10.0 | {"url":"https://search.r-project.org/CRAN/refmans/CLVTools/html/vcov.clv.fitted.html","timestamp":"2024-11-07T00:27:53Z","content_type":"text/html","content_length":"3118","record_id":"<urn:uuid:e29c14dc-da8e-4dc8-9d51-804d2521e092>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00480.warc.gz"} |
Interpreting Regression Results
15.4 Interpreting Regression Results
Regression is used frequently to calculate the line of best fit. If you perform a regression analysis, you will generate an analysis report sheet listing the regression results of the model. In this
article, we explain how to interpret the imporant regressin reslts quickly and easily
In Parameters Table
The fitted values are reported in the Parameters table, like what is shown below:
Estimated values for each parameter of the best fit which would make the curve closest to the data points.
Standard Error
The parameter standard errors can give us an idea of the precision of the fitted values. Typically, the magnitude of the standard error values should be lower than the fitted values. If the standard
error values are much greater than the fitted values, the fitting model may be overparameterized.
Is every term in the regression model significant? Or does every predictor contribute to the response? The t-tests for coefficients answer these kinds of questions.
The null hypothesis for a parameter's t-test is that this parameter is equal to zero. So if the null hypothesis is not rejected, the corresponding predictor will be viewed as insignificant, which
means that it has little to do with the response.
The t-test can also be used as a detection tool. For example, in polynomial regression, we can use it to determine the proper order of the polynomial model. We add higher order terms until a t-test
for the newly-added term suggests that it is insignificant.
• t-Value: the test statistic for t-test
t-Value = Fitted value/Standard Error, for example the t-Value for y0 is 5.34198/0.58341 = 9.15655.
For this statistical t-value, it usually compares with a critical t-value of a given confident level $\alpha\,\!$ (usually be 5%). If the t-value is larger than the critical t-value ($|t|>t_
{\frac \alpha 2}$ ), it can be said that there is a significant difference.
However, the Prob>|t| is easier to interpret, and we recommend that you ignore t-Value and judge by Prob>|t|.
• Prob>|t|: The p-value for t-test
If Prob>|t| < $\alpha\,\!$ (usually be 5%), that means we find enough evidence to reject H0 of t-test. The smaller Prob>|t|, the more unlikely the parameter is equal to zero.
LCL and UCL (Parameter Confidence Interval)
UCL and LCL, upper and lower confidence intervals of parameter, indicate how likely the interval is to contain the true value. For example, in the above image of Parameters table, we are 95% sure
that the true value of offset(y0) is between 4.16764 and 6.51631, the true values of center(x0) is between 24.73246 and 25.08134, and the true values of width(w) is between 9.75801 and 10.58138.
The dependency value, which is computed from the variance-covariance matrix, typically indicates the significance of the parameter in your model. For example, if some dependency values are close to
1, this could mean that there is mutual dependency between those parameters. In other words, the function is over-parameterized and the parameter may be redundant. Note that you should include
restrictions to make sure that the fitting model is meaningful, which you can refer to this section.
In Statistics Table
The key linear fit statistics are summarized in the Statistics table, like what is shown below:
Residual Sum of Squares
Residual Sum of Squares is usually abbreviated to RSS. It is actually the sum of the square of the vertical deviations from each data point to the fitting regression line. It can be inferred that
your data is perfect fit if the value of RSS is equal to zero. This statistic can help to decide if the fitted regression line is a good fit for your data. Generally speaking, the smaller the
residual sum of squares, the better your model fits your data.
Scale Error with sqrt(Reduced Chi-Sqr)
The Reduced Chi-square value, which is also called Scale Error with square, is equal to the residual sum of square (RSS) divided by the degree of freedom. Typically a Reduced Chi-square value close
to 1 indicates a good fit result, and it implies that the difference between observed data and fitted data is consistent with the error variance. If the error variance is over-estimated, the Reduced
Chi-square value will be much less than 1. For under-estimated error variance, it will be much greater than 1. Note that it needs to select the correct variance in fitting procedure for Reduced
Chi-square. For example, if the y data is multiplied by a scaling factor simply, the Reduced Chi-square will be scaled as well. Only if you also scale the error variance by a correct factor, the
value of Reduced Chi-square will turn back into a normal value.
Pearson's r
Pearson’s correlation coefficient, denoted by Pearson’s r, can help to measure the strength of linear relationship between paired data. The value of Pearson’s r is constrained between -1 to 1. In
linear regression, a positive value of Pearson’s r indicates that there is positive linear correlation between predictor variable(x) and response variable(y), while a negative value of Pearson’s r
indicates that there is negative linear correlation between predictor variable(x) and response variable(y). The value of zero indicates that there is no linear correlation between data. What’s more,
the closer the value is to -1 or 1, the stronger linear correlation is.
R-Square (COD)
R-square, which is also known as the coefficient of determination (COD), is a statistical measure to qualify the linear regression. It is a percentage of the response variable variation that
explained by the fitted regression line, for example the R-square suggests that the model explains approximately more than 89% of the variability in the response variable. Hence, R-square is always
between 0 and 1. If R-square is 0, it indicates that fitted line explains none of the variability of the response data around its mean; while if R-square is 1, it indicates that the fitted line
explains all the variability of the response data around its mean. In general, the larger the R-square, the better the fitted line fits your data.
Adj. R-Square
R-square can be used to quantify how well a model fits the data, and R-square will always increase when a new predictor is added. It is a misunderstanding that a model with more predictors has a
better fit. The Adj. R-square is a modified version of R-square, which is adjusted for the number of predictor in the fitted line. Thus, it can be used to compare with the fitted lines with different
numbers of predictors. If the number of predictors is greater than 1, Adj.-square is always smaller than R-square.
To learn more about how to judge how good is the fit, see Additional Information of R-square
In ANOVA Table
How the regression equation account for the variability in the ys is answered in ANOVA table, like what is shown below:
F Value
F Value is a ratio of two mean squares,which can be computed by dividing the mean square of fitted model by the mean square of error. For example, the F value of the model above is 65764.4768/
16441.1192=2103.59577. It is a test statistic for a test of whether the fitted model differs significantly from the model y=constant, which is a flat line with slope being equal to zero. It can be
inferred that the more this ratio deviates from 1, the stronger the evidence for the fitted model differing significantly from the model y=constant.
Prob>F is a p-value for F-test, which is a probability with a value ranging from 0 to 1. If the p-value for F-test is less than the significant level $\alpha\,\!$ (usually be 5%), it can be conclude
that the fitted model is significantly different from the model y=constant, which inferred that the fitted model is a nonlinear curve or a linear curve with slope that significantly different from
Notes: In linear regression, if fixing the intercept at a certain value, the p value for F-test is not meaningful, and it is different from that in linear regression without the intercept constraint.
In Covariance and Correlation Table
The relationship between the variables can be obsevered in Covariance and Correlation table, like what is shown below:
The covariance value indicates the correlation between two variables, and the matrices of covariance in regression show the inter-correlations among all parameters. The diagonal values for covariance
matrix is equal to the square of parameter error.
The correlation matrix rescales the covariance values so that their range is from -1 to +1. A value close to +1 means the two parameters are positively correlated, while a value close to -1 indicates
negative correlation. And a zero value indicates that the two parameters are totally independent.
The fitted curve as well as its confidence band, prediction band and ellipse are plotted on the Fitted Curves Plot as below, which can help to interpret the regression model more intuitively.
The residual is defined as:
Residual plots can be used to assess the quality of a regression, which is usually at the end of the report as below:
Topics for Further Reading | {"url":"https://d2mvzyuse3lwjc.cloudfront.net/doc/Origin-Help/Interpret-Regression-Result","timestamp":"2024-11-13T07:58:24Z","content_type":"text/html","content_length":"160429","record_id":"<urn:uuid:ad5faa4a-a828-41df-ba08-39011a42117f>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00203.warc.gz"} |
DAG calculator: How to know the DAG size?
We have already presented the DAG in mining.
As a reminder, the DAG (Directed Acyclic Graphs) is a mathematical and computer construction allowing the creation of distributed systems and networks. The DAG is notably used by the Ravencoin
cryptocurrency which works thanks to the Proof of Work system.
In the mining activity, the size of the DAG is crucial. It tells whether the GPUs used are powerful enough to mine this and that cryptocurrency.
In this article, we will find out how to calculate the DAG size with the help of a DAG calculator that we recommend.
Reminder on the definition of DAG
The DAG is a well-known construction in the mathematical and computer world. The study of this type of structure began in 1878 with the research on graphs by the English mathematician, James Joseph
In cryptomining, the DAG gives the technical possibility to build identical distributed systems, like those found in blockchain technology.
A DAG is a finite directed graph, without directed cycles. Its vertices (which can be compared to blocks in the blockchain) are connected by edges (like the hash) with a specific direction.
Moreover, the entire graph takes us from point A to point B, without the possibility of returning to point A in any case.
A blockchain is composed of information grouped by sets, cryptographically linked and ordered to a previous set. This relationship cannot be broken at any time in the blockchain without immediately
replacing the following blocks. If the blockchain is modified, a fork is generated.
For the DAG, there is a relationship between vertices, a relationship given by the edges. If an edge is modified in the DAG, its relation is rewritten, generating a new DAG, and thus a different
Note that a DAG has an epoch. An epoch is a period of mining. For example, the epoch increases for every 60 000 blocks for Ethereum Classic.
When we talk about DAG, it is important for miners to know the current and future size of a DAG file. Indeed, a DAG file has a major impact on the speed of mining cryptocurrencies.
It all started with the Ethash algorithm, when Ethereum could be mined. The cryptocurrency required a dataset of about 1 GB, so the DAG file size was 1 GB.
However, the DAG file size did not stay at 1 GB and was increasing with each epoch, 30,000 blocks for Ethash. With these years of operation, Ethash increased its DAG file size, which made some GPUs
obsolete for mining Ethereum.
The DAG size is actually the size of the DAG file. It is important to know that not all cryptocurrencies have the same DAG size, as they can be mined at different block heights.
Mining algorithms like Etchash (Ethereum Classic) or KawPow (Ravencoin) use the Proof of Work system which also uses a lot of memory to run.
Let’s go back to our file. The DAG file is located directly in the VRAM of a GPU. If this DAG file is larger than the memory of the GPU, then the GPU will become obsolete for mining cryptocurrencies.
Today, the current DAG file size for Ethereum Classic is about 3 GB. This means that it is possible to mine Ethereum Classic with GPUs with at least 3 GB of VRAM.
The DAG size calendar offered by Minerstat allows you to know, depending on certain cryptocurrencies, the minimum number of VRAM that your GPU needs to mine this and that cryptocurrency. We can know
this according to the number of epochs of each cryptocurrency.
Let’s take an example.
For EthereumPoW, we see that a GPU with a minimum of 5 GB of VRAM is required to mine the EthereumPoW cryptocurrency. We are currently at about 15 million blocks mined in total for this
We also see that by April 2024, we will need a minimum of a 6GB VRAM GPU when we reach the 19,200,000th block, which is at the 640th epoch.
To mine Ravencoin, you will need at least a 4GB VRAM GPU on July 9, 2023:
We can see that it is important to know the file size of the DAG. Without this, mining may be impossible due to outdated hardware.
The DAG is also an important element if you want to dual mine.
Feel free to join us on the Cruxpool Discord to join our mining community and/or go to start mining on Cruxpool! | {"url":"https://cruxpool.com/blog/how-to-know-the-dag-size-dag-calculator/","timestamp":"2024-11-12T19:37:07Z","content_type":"text/html","content_length":"235704","record_id":"<urn:uuid:f4c9ff11-0819-4749-9f34-30a66a916bd1>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00653.warc.gz"} |
Environmental Data Scientist Blog
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part:
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part:
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part:
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part:
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part:
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part:
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part:
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part:
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part:
A brief review of the basic concepts of Biomedical Ethics. The following concepts will be discussed in this part: | {"url":"https://sanikovich.com/","timestamp":"2024-11-10T02:12:11Z","content_type":"text/html","content_length":"48423","record_id":"<urn:uuid:3366cda5-0ea3-457c-8706-fdc3ce3f12bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00287.warc.gz"} |
Conditional temporal moment of the time-frequency distribution of a signal
Time-frequency moments provide an efficient way to characterize signals whose frequencies change in time (that is, are nonstationary). Such signals can arise from machinery with degraded or failed
hardware. Classical Fourier analysis cannot capture the time-varying frequency behavior. Time-frequency distribution generated by short-time Fourier transform (STFT) or other time-frequency analysis
techniques can capture the time-varying behavior, but directly treating these distributions as features carries a high computational burden, and potentially introduces unrelated and undesirable
feature characteristics. In contrast, distilling the time-frequency distribution results into low-dimension time-frequency moments provides a method for capturing the essential features of the signal
in a much smaller data package. Using these moments significantly reduces the computational burden for feature extraction and comparison — a key benefit for real-time operation [1], [2].
The Predictive Maintenance Toolbox™ implements the three branches of time-frequency moment:
• Conditional spectral moment — tfsmoment
• Conditional temporal moment — tftmoment
• Joint time-frequency moment — tfmoment
momentT = tftmoment(xt,order) returns the conditional temporal moment of timetable xt as a matrix. The momentT variables provide the temporal moments for the orders you specify in order. The data in
xt can be nonuniformly sampled.
momentT = tftmoment(x,fs,order) returns the conditional temporal moment of time-series vector x, sampled at rate fs. The moment is returned as a matrix, in which each column represents a temporal
moment corresponding to each element in order. With this syntax, x must be uniformly sampled.
momentT = tftmoment(x,ts,order) returns the conditional temporal moment of x sampled at the time instants specified by ts in seconds.
• If ts is a scalar duration, then tftmoment applies it uniformly to all samples.
• If ts is a vector, then tftmoment applies each element to the corresponding sample in x. Use this syntax for nonuniform sampling.
momentT = tftmoment(p,fp,tp,order) returns the conditional temporal moment of a signal whose power spectrogram is p. fp contains the frequencies corresponding to the temporal estimate contained in p.
tp contains the vector of time instants corresponding to the centers of the windowed segments used to compute short-time power spectrum estimates. Use this syntax when:
• You already have the power spectrogram you want to use.
• You want to customize the options for pspectrum, rather than accept the default pspectrum options that tftmoment applies. Use pspectrum first with the options you want, and then use the output p
as input for tftmoment. This approach also allows you to plot the power spectrogram.
momentT = tftmoment(___,Name,Value) specifies additional properties using name-value pair arguments. Options include moment centralization and time-limit specification.
You can use Name,Value with any of the input-argument combinations in previous syntaxes.
[momentT,f] = tftmoment(___) returns the frequency vector f associated with the moment matrix in momentT.
You can use f with any of the input-argument combinations in previous syntaxes.
tftmoment(___) with no output arguments plots the conditional temporal moment. The plot x-axis is frequency, and the plot y-axis is the corresponding temporal moment.
You can use this syntax with any of the input-argument combinations in previous syntaxes.
Plot the Conditional Temporal Moments of a Time Series Vector
Plot the conditional temporal moments of a time series using a plot-only approach and a return-data approach.
Load and plot the data, which consists of simulated vibration measurements for a system with a fault that causes periodic resonances. x is the vector of measurements, and fs is the sampling
load tftmoment_example x fs
xlabel('Time in Seconds')
title('Simulated Vibration Measurements')
Use the function pspectrum with the 'spectrogram' option to show the frequency content versus time.
The spectrogram shows that the first burst is at 100 Hz, and the second burst is at 300 Hz. The 300-Hz burst is stronger than the 100-Hz burst by 70 dB.
Plot the second temporal moment (variance), using the plot-only approach with no output arguments and specifying fs.
order = 2;
tftmoment(x,fs,order);title('Second Temporal Moment')
There are two distinct features in the plot at 100 and 300 Hz corresponding to the induced resonances shown by the spectrogram. The moments are much closer in magnitude than the spectral results
Now find the first four temporal moments, using the timeline ts that you already constructed. This time, use the form that returns both the moment vectors and the associated frequency vectors. Embed
the order array as part of the input argument.
[momentT,f] = tftmoment(x,ts,[1 2 3 4]);
Each column of momentT contains the moment corresponding to one of the input orders.
momentT_1 = momentT(:,1);
momentT_2 = momentT(:,2);
momentT_3 = momentT(:,3);
momentT_4 = momentT(:,4);
Plot the four moments separately to compare the shapes.
title('First Temporal Moment — Mean')
xlabel('Frequency in Hz')
title('Second Temporal Moment — Variance')
xlabel('Frequency in Hz')
title('Third Temporal Moment — Skewness')
xlabel('Frequency in Hz')
title('Fourth Temporal Moment — Kurtosis')
xlabel('Frequency in Hz')
For the data in this example, the second and fourth temporal moments show the clearest features for the faulty resonance.
Use an Existing Power Spectrogram to Compute the Conditional Temporal Moment
By default, tfsmoment calls the function pspectrum internally to generate the power spectrogram that tftmoment uses for the moment computation. You can also import an existing power spectrogram for
tftmoment to use instead. This capability is useful if you already have a power spectrogram as a starting point, or if you want to customize the pspectrum options by generating the spectrogram
explicitly first.
Input a power spectrogram that has already been generated using default options. Compare the resulting temporal-moment plot with one that tftmoment generates using its own pspectrum default options.
The results should be the same.
Load the data, which consists of simulated vibration measurements for a system with a fault that causes periodic resonances. p is the previously computed spectrogram, fp and tp are the frequency and
time vectors associated with p, x is the original vector of measurements, and fs is the sampling frequency.
load tftmoment_example p fp tp x fs
Determine the second temporal moment using the spectrogram and its associated frequency and time vectors. Plot the moment.
[momentT_p,f_p] = tftmoment(p,fp,tp,2);
title('Second Temporal Moment using Input Spectrogram ')
Now find and plot the second temporal moments using the original data and sampling rate.
[momentT,f] = tftmoment(x,fs,2);
title('Second Temporal Moment using Measurement Data')
As expected, the plots match since the default pspectrum options were used for both. This result demonstrates the equivalence between the two approaches when there is no customization.
Find the Conditional Temporal Moments of Data Measurements in a Timetable
Real-world measurements often come packaged as part of a time-stamped table that records actual time and readings rather than relative times. You can use the timetable format for capturing this data.
This example shows how tftmoment operates with a timetable input, in contrast to the data vector inputs used for the other tftmoment examples, such as Plot the Conditional Temporal Moments of a Time
Series Vector.
Load the data, which consists of a single timetable (xt_inner1) containing measurement readings and time information for a piece of machinery. Examine the properties of the timetable.
load tfmoment_tdata.mat xt_inner1;
ans =
TimetableProperties with properties:
Description: ''
UserData: []
DimensionNames: {'Time' 'Variables'}
VariableNames: {'x_inner1'}
VariableTypes: "double"
VariableDescriptions: {}
VariableUnits: {}
VariableContinuity: []
RowTimes: [146484x1 duration]
StartTime: 0 sec
SampleRate: 4.8828e+04
TimeStep: 2.048e-05 sec
Events: []
CustomProperties: No custom properties are set.
Use addprop and rmprop to modify CustomProperties.
This table consists of dimensions Time and the Variables, where the only variable is x_inner1.
Find the second and fourth conditional temporal moments (order = [2 4]) for the data in the timetable.
order = [2 4];
[momentT_xt_inner1,f] = tftmoment(xt_inner1,order);
The temporal moments are represented by the columns of momentT_xt_inner1, just as they would be for a moment taken from a time series vector input.
Plot the moments versus returned frequency vector f.
momentT_inner1_2 = momentT_xt_inner1(:,1);
momentT_inner1_4 = momentT_xt_inner1(:,2);
title("Second Temporal Moment")
title("Fourth Temporal Moment")
xlabel('Frequency in Hz')
Input Arguments
xt — Time-series signal
Time-series signal for which tftmoment returns the moments, specified as a timetable that contains a single variable with a single column. xt must contain increasing, finite row times. If the
timetable has missing or duplicate time points, you can fix it using the tips in Clean Timetable with Missing, Duplicate, or Nonuniform Times. xt can be nonuniformly sampled, with the pspectrum
constraint that the median time interval and the mean time interval must obey:
$\frac{1}{100}<\frac{\text{Median time interval}}{\text{Mean time interval}}<100.$
For an example of timetable input, see Find the Conditional Temporal Moments of Data Measurements in a Timetable
order — Moment orders to return
integer scalar | integer vector
Moment orders to return, specified as one of the following:
• Integer — Compute one moment.
• Vector — Compute multiple moments at once.
Example: momentT = tftmoment(x,2) specifies the second-order temporal moment (variance) of the time-frequency distribution of x.
Example: momentT = tftmoment(x,[1 2 3 4]) specifies the first four moment orders of the time-frequency distribution of x.
You can specify any order and number of orders, but low-order moments carry less computational burden and are better suited to real-time applications. The first four moment orders correspond to the
statistical moments of a data set:
1. Mean ("group delay" for temporal data)
2. Variance
3. Skewness (degree of asymmetry about the mean)
4. Kurtosis (length of outlier tails in the distribution — a normal distribution has a kurtosis of 3)
For examples, see:
x — Time-series signal
Time-series signal from which tftmoment returns the moments, specified as a vector.
For an example of a time-series input, see Plot the Conditional Temporal Moments of a Time Series Vector
fs — Sample rate
positive scalar
Sample rate of x, specified as positive scalar in hertz when x is uniformly sampled.
ts — Sample-time values
duration scalar | vector | duration vector | datetime vector
Sample-time values, specified as one of the following:
• duration scalar — time interval between consecutive samples of X.
• Vector, duration array, or datetime array — time instant or duration corresponding to each element of x.
ts can be nonuniform, with the pspectrum constraint that the median time interval and the mean time interval must obey:
$\frac{1}{100}<\frac{\text{Median time interval}}{\text{Mean time interval}}<100.$
p — Power spectrogram or spectrum of signal
matrix | vector
Power spectrogram or spectrum of a signal, specified as a matrix (spectrogram) or a column vector (spectrum). p contains an estimate of the short-term, time-localized power spectrum of a time-series
signal. If you specify p, then tftmoment uses p rather than generate its own power spectrogram. For an example, see Use a Customized Power Spectrogram to Compute the Conditional Spectral Moment.
fp — Frequencies for p
Frequencies for power spectrogram or spectrum p when p is supplied explicitly to tftmoment, specified as a vector in hertz. The length of fp must be equal to the number of rows in p.
tp — Time information for p
vector | duration vector | datetime vector | duration scalar
Time information for power spectrogram or spectrum p when p is supplied explicitly to tftmoment, specified as one of the following:
• Vector of time points, whose data type can be numeric, duration, or datetime. The length of vector tp must be equal to the number of columns in p.
• duration scalar that represents the time interval in p. The scalar form of tp can be used only when p is a power spectrogram matrix.
• For the special case where p is a column vector (power spectrum), tp can be a numeric, duration, or datetime scalar representing the time point of the spectrum.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but
the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: 'Centralize',false,'TimeLimits',[20 100] computes the noncentralized conditional temporal moment for the portion of the signal ranging from 20 sec to 100 sec.
Centralize — Centralize-moment option
true (default) | false
Centralize-moment option, specified as the comma-separated pair consisting of 'Centralize' and a logical.
• If Centralize is true, then tftmoment returns the centralized conditional moment by subtracting the conditional mean (which is the first moment) in the computation.
• If Centralize is false, then tftmoment returns the noncentralized moment, preserving any data offset.
Example: momentT = tftmoment(x,2,'Centralize',false).
TimeLimits — Time Limits
full timespan (default) | [t1 t2]
Time limits, specified as the comma-separated pair consisting of 'TimeLimits' and a two-element vector containing lower and upper bounds t1 and t2 in the same units as ts, and of the data types:
• Numeric or duration when fs or a scalar ts are specified, or when ts is a numeric or duration vector
• Numeric, duration, or datetime when ts is specified as a datetime vector
This specification allows you to extract a temporal section of data from a longer data set.
Output Arguments
momentT — Conditional temporal moment
Conditional temporal moment returned as a matrix whose columns represent the temporal moments.
momentT is a matrix with one or more columns, regardless of whether the input data is timetable xt, time-series vector x, or spectrogram data p.
More About
Conditional Temporal Moments
The conditional temporal moments of a nonstationary signal comprise a set of time-varying parameters that characterize the group delay as it evolves in time. They are related to the conditional
spectral moment and the joint time-frequency moments. The conditional spectral moment is an integral function of frequency, given time, and marginal distribution. The conditional temporal moment is
an integral function of time, given frequency, and marginal distribution. The joint time-frequency moment is a double integral that varies both time and frequency [1], [2].
Each moment is associated with a specific order, with the first four orders being the statistical properties of 1) mean, 2) variance, 3) skewness, and 4) kurtosis.
tftmoment computes the conditional temporal moments of the time-frequency distribution for a signal x, for the orders specified in order. The function performs these steps:
1. Compute the spectrogram power spectrum, P(t,f), of the input using the pspectrum function and uses it as a time-frequency distribution. If the syntax used supplies an existing P(t,f), then
tftmoment uses that instead.
2. Estimate the conditional temporal moment ${〈{t}^{n}〉}_{\omega }$ of the signal using, for the non-centralized case:
${〈{t}^{n}〉}_{\omega }=\frac{1}{P\left(\omega \right)}\int {t}^{n}P\left(t,\omega \right)dt,$
where m is the order and P(t) is the marginal distribution.
For the centralized conditional temporal moment ${\mu }_{t}^{n}\left(\omega \right)$, the function uses
${\mu }_{t}^{n}\left(\omega \right)=\frac{1}{P\left(\omega \right)}{\int \left(t-{〈{t}^{1}〉}_{\omega }\right)}^{n}P\left(t,\omega \right)dt.$
[1] Loughlin, P. J. "What are the time-frequency moments of a signal?" Advanced Signal Processing Algorithms, Architectures, and Implementations XI, SPIE Proceedings. Vol. 4474, November 2001.
[2] Loughlin, P., F. Cakrak, and L. Cohen. "Conditional moment analysis of transients with application to helicopter fault data." Mechanical Systems and Signal Processing. Vol 14, Issue 4, 2000, pp.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Version History
Introduced in R2018a | {"url":"https://www.mathworks.com/help/predmaint/ref/tftmoment.html","timestamp":"2024-11-03T20:21:01Z","content_type":"text/html","content_length":"140096","record_id":"<urn:uuid:5ac53ef2-3aed-4c7c-84d1-8cd5beb5b3b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00349.warc.gz"} |
class Object
trait Matchable
class Any
Members list
Value members
Inherited methods
Divides each element of this tensor by the corresponding element of other. *
Divides each element of this tensor by the corresponding element of other. *
Inherited from:
Computes the absolute value of each element.
Computes the absolute value of each element.
Inherited from:
Tests if all elements of this tensor evaluate to true.
Tests if all elements of this tensor evaluate to true.
Inherited from:
Tests if any element of this tensor evaluates to true.
Tests if any element of this tensor evaluates to true.
Inherited from:
Returns the indices of the maximum value of all elements in the tensor.
Returns the indices of the maximum value of all elements in the tensor.
This is the second value returned by torch.max(). See its documentation for the exact semantics of this method.
val a = torch.rand(Seq(1, 3))
// tensor dtype=float32, shape=[1] 2
Inherited from:
Computes the gradient of current tensor w.r.t. graph leaves.
Computes the gradient of current tensor w.r.t. graph leaves.
The graph is differentiated using the chain rule. If the tensor is non-scalar (i.e. its data has more than one element) and requires gradient, the function additionally requires specifying gradient.
It should be a tensor of matching type and location, that contains the gradient of the differentiated function w.r.t. self.
This function accumulates gradients in the leaves - you might need to zero .grad attributes or set them to None before calling it. See Default gradient layouts<default-grad-layouts> for details on
the memory layout of accumulated gradients.
If you run any forward ops, create gradient, and/or call backward in a user-specified CUDA stream context, see Stream semantics of backward passes<bwd-cuda-stream-semantics>.
When inputs are provided and a given input is not a leaf, the current implementation will call its grad_fn (though it is not strictly needed to get this gradients). It is an implementation detail on
which the user should not rely. See https://github.com/pytorch/pytorch/pull/60521#issuecomment-867061780 for more details.
Inherited from:
This function is differentiable, so gradients will flow back from the result of this operation to input. To create a tensor without an autograd relationship to input see Tensor.detach.
Inherited from:
Copies the elements from src into this tensor and returns this.
Copies the elements from src into this tensor and returns this.
The src tensor must be broadcastable with the self tensor. It may be of a different data type or reside on a different device.
Value parameters
if true and this copy is between CPU and GPU, the copy may occur asynchronously with respect to the host. For other cases, this argument has no effect.
the source tensor to copy from
Inherited from:
Returns a new tensor with the sine of the elements of this tensor.
Returns a new tensor with the sine of the elements of this tensor.
Inherited from:
Returns a new Tensor, detached from the current graph.
Returns a new Tensor, detached from the current graph.
The result will never require gradient.
This method also affects forward mode AD gradients and the result will never have forward mode AD gradients.
Inherited from:
Divides each element of this tensor by the corresponding element of other. *
Divides each element of this tensor by the corresponding element of other. *
Inherited from:
Computes element-wise equality
Computes element-wise equality
The argument can be a tensor whose shape is broadcastable with this tensor.
Inherited from:
Computes element-wise equality
Computes element-wise equality
Inherited from:
True if other has the same size and elements as this tensor, false otherwise.
True if other has the same size and elements as this tensor, false otherwise.
Inherited from:
Compares the receiver object (this) with the argument object (that) for equivalence.
Compares the receiver object (this) with the argument object (that) for equivalence.
Any implementation of this method should be an equivalence relation:
• It is reflexive: for any instance x of type Any, x.equals(x) should return true.
• It is symmetric: for any instances x and y of type Any, x.equals(y) should return true if and only if y.equals(x) returns true.
• It is transitive: for any instances x, y, and z of type Any if x.equals(y) returns true and y.equals(z) returns true, then x.equals(z) should return true.
If you override this method, you should verify that your implementation remains an equivalence relation. Additionally, when overriding this method it is usually necessary to override hashCode to
ensure that objects which are "equal" (o1.equals(o2) returns true) hash to the same scala.Int. (o1.hashCode.equals(o2.hashCode)).
Value parameters
the object to compare against this object for equality.
Definition Classes
Inherited from:
Returns the tensor with elements exponentiated.
Returns the tensor with elements exponentiated.
Inherited from:
Returns a new view of this tensor with singleton dimensions expanded to a larger size.
Returns a new view of this tensor with singleton dimensions expanded to a larger size.
Passing -1 as the size for a dimension means not changing the size of that dimension.
Tensor can be also expanded to a larger number of dimensions, and the new ones will be appended at the front. For the new dimensions, the size cannot be set to -1.
Expanding a tensor does not allocate new memory, but only creates a new view on the existing tensor where a dimension of size one is expanded to a larger size by setting the stride to 0. Any
dimension of size 1 can be expanded to an arbitrary value without allocating new memory.
Value parameters
the desired expanded size
More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you
need to write to the tensors, please clone them first.
val x = torch.tensor((Seq(Seq(1), Seq(2), Seq(3)))
x.size // [3, 1]
x.expand(3, 4)
x.expand(-1, 4) // -1 means not changing the size of that dimension
Inherited from:
Divides each element of this tensor by the corresponding element of other and floors the result.
Divides each element of this tensor by the corresponding element of other and floors the result.
Inherited from:
Divides each element of this tensor by s and floors the result.
Divides each element of this tensor by s and floors the result.
Inherited from:
This function returns an undefined tensor by default and returns a defined tensor the first time a call to backward() computes gradients for this Tensor. The attribute will then contain the gradients
computed and future calls to backward() will accumulate (add) gradients into it.
This function returns an undefined tensor by default and returns a defined tensor the first time a call to backward() computes gradients for this Tensor. The attribute will then contain the gradients
computed and future calls to backward() will accumulate (add) gradients into it.
Inherited from:
Returns the tensor with elements logged.
Returns the tensor with elements logged.
Inherited from:
Accessing this property is equivalent to calling adjoint().
Accessing this property is equivalent to calling adjoint().
Inherited from:
Returns a view of this tensor with the last two dimensions transposed.
Returns a view of this tensor with the last two dimensions transposed.
x.mT is equivalent to x.transpose(-2, -1).
Inherited from:
Fills elements of self tensor with value where mask is true. The shape of mask must be broadcastable with the shape of the underlying tensor.
Fills elements of self tensor with value where mask is true. The shape of mask must be broadcastable with the shape of the underlying tensor.
Value parameters
the boolean mask
the value to fill in with
Tensor with masked elements set to value
Inherited from:
Returns a tuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax).
Returns a tuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim. And indices is the index location of each maximum value found (argmax).
If keepdim is true, the output tensors are of the same size as input except in the dimension dim where they are of size 1. Otherwise, dim is squeezed (see :func:torch.squeeze), resulting in the
output tensors having 1 fewer dimension than input.
If there are multiple maximal values in a reduced row then the indices of the first maximal value are returned.
Inherited from:
Returns the maximum value of all elements of this tensor.
Returns the maximum value of all elements of this tensor.
Inherited from:
Returns a new tensor with the negative of the elements of this tensor.
Returns a new tensor with the negative of the elements of this tensor.
Inherited from:
Returns the total number of elements in the input tensor.
Returns the total number of elements in the input tensor.
Inherited from:
Repeats this tensor along the specified dimensions.
Repeats this tensor along the specified dimensions.
Unlike expand, this function copies the tensor’s data.
Value parameters
The number of times to repeat this tensor along each dimension
Inherited from:
Returns a new tensor with the sine of the elements of this tensor.
Returns a new tensor with the sine of the elements of this tensor.
Inherited from:
Returns the sum of all elements of this tensor.
Returns the sum of all elements of this tensor.
Inherited from:
Returns a summary of the contents of this tensor.
Returns a summary of the contents of this tensor.
Value parameters
If true, the summary is flattened to one line. Otherwise, the summary may span multiple lines.
If true, the data type and the shape of the tensor are explicitly included in the summary. Otherwise, they are not.
Maximum number of entries to show for each axis/dimension. If the size of an axis exceeds maxEntries, the output of that axis will be shortened to the first and last three elements. Defaults to
6. Values below 6 are ignored.
Tensor summary.
Inherited from:
Expects input to be <= 2-D tensor and transposes dimensions 0 and 1.
Expects input to be <= 2-D tensor and transposes dimensions 0 and 1.
0-D and 1-D tensors are returned as is. When input is a 2-D tensor this is equivalent to transpose(input, 0, 1).
Inherited from:
Performs Tensor dtype and/or device conversion.
Performs Tensor dtype and/or device conversion.
Inherited from:
Returns a string representation of the object.
Returns a string representation of the object.
The default representation is platform dependent.
Definition Classes
Inherited from:
Returns the sum of the elements of the diagonal of the input 2-D matrix.
Returns the sum of the elements of the diagonal of the input 2-D matrix.
Inherited from:
Returns a tensor that is a transposed version of input (this Tensor). The given dimensions dim0 and dim1 are swapped.
Returns a tensor that is a transposed version of input (this Tensor). The given dimensions dim0 and dim1 are swapped.
If input is a strided tensor then the resulting out tensor shares its underlying storage with the input tensor, so changing the content of one would change the content of the other.
If input is a sparse tensor then the resulting out tensor does not share the underlying storage with the input tensor.
If input is a sparse tensor with compressed layout (SparseCSR, SparseBSR, SparseCSC or SparseBSC) the arguments dim0 and dim1 must be both batch dimensions, or must both be sparse dimensions. The
batch dimensions of a sparse tensor are the dimensions preceding the sparse dimensions.
Value parameters
the first dimension to be transposed
the second dimension to be transposed
the input tensor.
See also
Transpositions which interchange the sparse dimensions of a SparseCSR or SparseCSC layout tensor will result in the layout changing between the two options. Transposition of the sparse dimensions
of a SparseBSR or SparseBSC layout tensor will likewise generate a result with the opposite layout.
Inherited from:
Returns a new tensor with the negative of the elements of this tensor.
Returns a new tensor with the negative of the elements of this tensor.
Inherited from:
Returns a new tensor with a dimension of size one inserted at the specified position.
Returns a new tensor with a dimension of size one inserted at the specified position.
The returned tensor shares the same underlying data with this tensor.
A dim value within the range [-input.dim() - 1, input.dim() + 1) can be used. Negative dim will correspond to unsqueeze applied at dim = dim + input.dim() + 1.
val x = torch.Tensor(Seq(1, 2, 3, 4))
// [[1, 2, 3, 4]]
// [[1],
// [2],
// [3],
// [4]]
Value parameters
the index at which to insert the singleton dimension
Inherited from:
Set tensor value(s) at indices
Set tensor value(s) at indices
val t = torch.zeros(Seq(2, 2))
// set first row to ones
t(Seq(0)) = 1
Inherited from:
Calculates the variance of all elements of this tensor.
Calculates the variance of all elements of this tensor.
Inherited from: | {"url":"http://storch.dev/api/torch/Bits2x4Tensor.html","timestamp":"2024-11-13T03:07:11Z","content_type":"text/html","content_length":"318956","record_id":"<urn:uuid:021dd98b-6d3b-4fb6-b660-c9cbb7a7bfe3>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00074.warc.gz"} |
2D Shapes - Gone App
In geometry, 2d shapes and 3d shapes are explained widely to make you understand the different types of objects you come across in real life. These shapes each have their own patterns and properties.
The shapes can vary depending on many factors such as angles, sides, lengths, heights, widths, volume, and so forth. These 2D and 3D shapes have been taught to us since our primary classes. This
article will cover the various types of two-dimensional shapes.
Table of Contents:
• Definition
• Names
• Circle
• Triangle
• Square
• Rectangle
• Pentagon
• Octagon
• Properties
• 2d and 3d Shapes
2D Shapes Definition
In maths, 2d shapes can be defined as the plane figures that can be drawn on a flat (or plane) surface or a piece of paper. Each 2d shape has different parameters, such as the area and perimeter.
Some 2d shapes have sides and corners while others have curved borders.
2D Shapes Names
1. Circle
2. Triangle
3. Square
4. Rectangle
5. Pentagon
6. Octagon
The basic types of 2d shapes are a circle, triangle, square, rectangle, pentagon, quadrilateral, hexagon, octagon, etc. Polygons are any shape that has sides, except the circle. A regular polygon is
a polygon that has all angles and sides equal. An ellipse can also be considered a non-polygonal shape, even though it includes the circle. Both the ellipse and circle have a curved form, while the
polygons have sides and a closed structure. Let us now discuss a few shapes one-by-one.
A circle is a closed 2d figure in which the set of all the points in the plane is equidistant from a given point called “center”. A radius is the distance from the center of the circle to its outer
line. Real life examples of the circle include wheels, pizzas, orbits, and so on.
A triangle is a three-sided polygon (2d Shape) which has three edges and three vertices. The sum of all the three angles of a triangle is equal to 180°. The best example of a triangular shape is
pyramids. Here are the properties of triangles.
A square is a four-sided polygon (2d Shape), whose four sides are equal in length and all the angles are equal to 90°. It is a regular quadrilateral of two dimensions. The diagonals of the square
also bisect each other at 90°. Square shapes can be used to describe a wall or table with all sides equal.
A rectangle is a 2d shape which has four sides, where the opposite sides are equal and parallel to each other. All the angles of a rectangle are equal to 90°. You can see the rectangle in bricks, TV,
and cardboard. They have length and breadth.
A pentagon is a five-sided polygon (2d Shape), and it can be regular or irregular. In the case of a regular pentagon, each interior angle is equal to 108°, and each exterior angle measures 72°. It is
divided into five diagonals. The pentagon shape is best illustrated by the Pentagon building, which houses the US Department of Defense headquarters.
An octagon is an eight-sided polygon which can be either regular or irregular. It is a 2d form with eight angles. The sum of all the interior angles of an octagon is 1080°. You can see the
octagon-shaped stop sign board on the roadside.
2D Shapes Properties
Go through the below to learn all the properties of 2D shapes.
Area and Perimeter of 2D Shapes
The area is the region covered by a 2d shape on a plane. The areas for different shapes are given below:
2d Shapes and 3d Shapes
We know that 2d shapes are flat figures and 3d shapes are solid figures. Here are some comparisons between these two types.
Solved Examples
Q.1: What is the area of a square that has a side length equal to 4 inches?
Solution: Given, length of side of square = 4 inches
Area of square = side2 = (4) = 16 in2
Q.2: What is the area of a circle whose radius is 7 cm? (π=22/7)
Solution: Given, radius of circle = 7 cm
Area of circle = πr2 = (22/7) x 72 = 22 x 7 = 154 sq.cm.
Q.3: Determine the perimeter of the rectangle with a length and a breadth of 10 cm and 5cm, respectively. Also, determine its area.
Solution: Given,
Length of rectangle = 10 cm
Breadth of rectangle = 5 cm
Area of rectangle = Length x Breadth = 10 x 5 = 50 cm2
Perimeter of rectangle = 2(Length + Breadth) = 2(10+5) = 2 x 15 = 30 cms
Related Links
Learn more about geometrical shapes with us and download BYJU’S – The Learning App to get personalized video content that will interactively explain the concepts of geometry. | {"url":"https://thegoneapp.org/2d-shapes-names/","timestamp":"2024-11-07T21:53:07Z","content_type":"text/html","content_length":"57000","record_id":"<urn:uuid:a5626adc-e737-411b-aff4-57a307e7a57a>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00487.warc.gz"} |
Use Wolfram Alpha to find out everything Google can't tell you
Google is synonymous with search, but it’s not the only option for online queries. You probably know about Bing, and maybe you’ve heard of DuckDuckGo, but there’s also Wolfram Alpha, which aims to
return instant answers to difficult questions instead of a list of matching web pages.
It’s a totally different kind of search engine—covering everything from math formulas to personal health in an accessible way—and here’s what it’s capable of.
Help with your daily life
Use a query like “walking 45 minutes, 4 miles per hour” to work out just how many calories your morning run or walk is going to burn. Those values can be adjusted as necessary, or you can replace
time with distance traveled. Wolfram Alpha will also show you how much fat you can expect to burn and what your estimated heart rate will be.
Part of any healthy lifestyle involves eating right, and if you type the name of any food into the Wolfram Alpha search box, the wealth of data you get back will include the fat, cholesterol, sodium,
and protein it contains. To compare foods, use a comma (“tea, coffee”).
Wolfram Alpha can even help you prepare meals. Try a search like “time to cook a 20-pound turkey” to get an estimate of how long you can expect to leave the bird in the oven. You can also ask the
search engine how many people that turkey, or any other food, can be expected to serve, how big a turkey you’ll need to feed a certain number of people, or how long something will take to thaw out
from frozen.
Continuing with the household theme, Wolfram Alpha can work out mortgage prices with a query like “mortgage $150,000, 6.5% interest, 30 years.” That’ll return monthly payments, total interest paid,
and other bits of information to help you make an educated choice. Google will try to find a website that answers the same question, but Wolfram Alpha presents the data in a much better way.
Get estimates of gas prices too, with something like “16 gallons of gas in New York City,” or “price of gasoline in Dallas,” which will give you historical prices as well as the current one. Want to
know if you’d be better off living somewhere else? Try a query like “cost of living index Las Vegas vs. Los Angeles,” to find out.
Wolfram Alpha can also return information on tire sizes (try asking, “tire size 225/75R16, tire size LT215/85R17” to compare two types), construction material dimensions (“1/2 inch bolt,” for
example), and even the properties of ropes (like “water absorption of 28 millimeter sisal rope”)—so whatever DIY project you’ve got going on, the search engine should be able to lend a hand.
Information about society and culture
By George, look at all those results. David Nield
Interestingly, Wolfram Alpha handles numerous society and culture searches better than Google does. Queries like “famous people named George” or “facts about Rembrandt” will bring up a concise list
of helpful responses.
Type in the name of a TV network, such as ABC or CBS, and Wolfram Alpha will return a host of useful information, from the station’s founding date to its U.S. household reach. It will even throw in
the current logo for good measure. As with most Wolfram Alpha queries, you can compare TV networks by separating them with a comma: “ABC, NBC, CBS, FOX,” for example.
In addition to providing plenty of details on specific celebrities, Wolfram Alpha is also good at finding movies or TV shows that particular people teamed up on. It’ll handle a complex query like
“films with Bill Murray and Owen Wilson and Jason Schwartzman” better than Google, for example.
If you’re more interested in historical events than modern culture, inputting “founding of Carthage, fall of Constantinople,” or whatever events you like, will bring up a visual timeline and a few
key facts about each search term. Click on the individual events to find the dates, countries, and people involved.
Another neat trick Wolfram Alpha can do is figuring out how much a given amount of today’s U.S. dollars would be worth in a historical context, or vice versa. To see what we mean, type in “US$2500
(1950 U.S. dollars)” to see what $2,500 in 1950 would be worth today, or “10,000 current U.S. dollars in 1910” to see what $10,000 today would have been worth more than a century ago.
Wolfram Alpha’s talents even extend to words and linguistics. For instance, use queries such as “_al__la__” to find words that match a particular pattern (each underscore serves as a wildcard) or
“words that rhyme with sight” to find rhymes. You can look up definitions and synonyms just as you can with Google, too.
Get science and technology data
Ah yes, the ol’ Thirlmere Aqueduct unit of measurement. David Nield
Wolfram Alpha does unit conversions just as well as Google, but it lets you dig much deeper. For instance, try a search along the lines of “20 miles + 24 kilometers” to get the answer in miles,
kilometers, centimeters, nautical miles, and even how that distance compares to a marathon race or a Formula One track lap.
Physics queries are well-handled, too, including those involving thermodynamics, mechanical work, gravitational calculations, magnetism, optics, relativity, and centripetal acceleration. Just plug in
the formula you want resolved and let Wolfram Alpha work its magic.
If chemistry is your field of interest, the search engine covers elements, compounds, ions, chemical quantities, and more. A query such as “12 pounds of 4-cyanoindole” returns a long list of
information that includes mass composition, a structural diagram, the chemical name and formula, and even its melting point.
Engineering isn’t left out, so feel free to ask Wolfram Alpha to compute the characteristics of an AC signal and see it plotted on screen (“AC source 110 volts”) or work out where the Hubble
telescope is right now and where it’s headed next (“orbital path of Hubble telescope”). It covers sound and acoustics, too.
The search engine can dig into transportation data as well. Maybe you want to compare flight passenger data, which can be done with a query like “average daily passengers United, Delta,” or maybe you
need to know the “total length of all roads in France” for your next trivia night.
The Wolfram Alpha engine is really good at returning information about materials such as alloys, plastics, minerals, and wood—just type in the name of a material to learn about it. As usual, you can
compare materials by separating the names with commas.
Crunch math and statistics numbers
Throw that graphing calculator in the trash. Actually, don’t, it was pretty expensive. David Nield
Serious mathematicians and number-crunchers should love Wolfram Alpha. The search engine covers a whole host of functions and calculations, whether you need to work out the lengths of the sides of a
triangle or locate the inflection points of a function. Start with the basics of addition and subtraction, then go as deep as you need to.
The plotting and graphics capabilities of Wolfram Alpha are particularly impressive. Queries like “plot x^3 – 6x2 + 4x + 12″ (the function of one variable) and “3D parametric plot (cos t, sin 2t, sin
3t),” (a parametric curve in three dimensions) will produce an on-screen plot that you can analyze further or download to use somewhere else.
Algebra is included too, of course. Wolfram Alpha can solve equations (“solve x^2 + 4x + 6 = 0″), compute the properties of a rational function (“(x2-1)/(x^2+1)”), simplify equations (“simplify
x5-20x^4+163x3-676x^2+1424x-1209″), and plenty more.
There are plenty of statistics-related queries you can try, too. Input something like “mean {21.3, 38.4, 12.7, 41.6}” and Wolfram Alpha will work out the average. You can use something more complex,
like “X~Poisson(7.3), EV[3X4-7],” to compute the expected value of a random variable. Regression analysis and statistical inference are covered, too.
Wolfram Alpha can show step-by-step instructions on a lot of equations as well, so if you have a query along the lines of “60431 / 89,” you can go through the workings one stage at a time. It’s
really useful for learning or revising. This is a Wolfram Alpha Pro feature though, and the more advanced version of the search engine is only available via a $7 per month subscription.
Finally, Wolfram Alpha is smart enough to work out fun calculations, too. Maybe you want to know “how many baseballs fit in a Boeing 747?” or find the answer to some other wild thought. It’s a bit
silly, but it’s yet another example of why you might want to load up Wolfram Alpha rather than Google. | {"url":"https://www.popsci.com/how-to-use-wolfram-alpha/","timestamp":"2024-11-11T17:15:55Z","content_type":"text/html","content_length":"219632","record_id":"<urn:uuid:c7e6de73-e528-4722-a5bc-1b5a83b2a05a>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00524.warc.gz"} |
A Comparative Study of Feature Selection Approaches for Human Activity Recognition Using Multimodal Sensory Data
Punjab University College of Information Technology, University of the Punjab, Lahore 54000, Pakistan
Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 4 March 2021 / Revised: 16 March 2021 / Accepted: 26 March 2021 / Published: 29 March 2021
(This article belongs to the Section
Human activity recognition (HAR) aims to recognize the actions of the human body through a series of observations and environmental conditions. The analysis of human activities has drawn the
attention of the research community in the last two decades due to its widespread applications, diverse nature of activities, and recording infrastructure. Lately, one of the most challenging
applications in this framework is to recognize the human body actions using unobtrusive wearable motion sensors. Since the human activities of daily life (e.g., cooking, eating) comprises several
repetitive and circumstantial short sequences of actions (e.g., moving arm), it is quite difficult to directly use the sensory data for recognition because the multiple sequences of the same activity
data may have large diversity. However, a similarity can be observed in the temporal occurrence of the atomic actions. Therefore, this paper presents a two-level hierarchical method to recognize
human activities using a set of wearable sensors. In the first step, the atomic activities are detected from the original sensory data, and their recognition scores are obtained. Secondly, the
composite activities are recognized using the scores of atomic actions. We propose two different methods of feature extraction from atomic scores to recognize the composite activities, and they
include handcrafted features and the features obtained using the subspace pooling technique. The proposed method is evaluated on the large publicly available CogAge dataset, which contains the
instances of both atomic and composite activities. The data is recorded using three unobtrusive wearable devices: smartphone, smartwatch, and smart glasses. We also investigated the performance
evaluation of different classification algorithms to recognize the composite activities. The proposed method achieved 79% and 62.8% average recognition accuracies using the handcrafted features and
the features obtained using subspace pooling technique, respectively. The recognition results of the proposed technique and their comparison with the existing state-of-the-art techniques confirm its
1. Introduction
Human activity recognition (HAR) aims at verifying the individual’s activity of daily livings (ADLs) from the data captured using various multimodal modalities, e.g., time-series data from motion
sensors, color and/or depth information from video cameras. Lately, HAR-based systems have gained the attention of the research community due to their use in several applications, such as healthcare
monitoring applications, surveillance systems, gaming applications, and anti-terrorist and anti-crime securities. The availability of low cost yet highly accurate motion sensors in mobile gadgets and
wearable sensors, e.g., smartwatches, has also played a vital role in the dramatic development of these applications. The main objective of such a system is to automatically detect and recognize the
human daily life activities from the captured data by creating a predictive model that allows the classification of an individual’s behavior [
]. ADL refers to such tasks or activities that undertake by people in their daily life [
]. An ADL is usually a long term activity that consists of a sequence of small actions as depicted in
Figure 1
. We can describe the long term activities as composite activities, e.g., cooking, playing, etc., whereas the small sequence of actions are known as atomic activities, such as raising an arm or a leg
HAR systems usually follow a standard set of actions in a sequence which includes data capturing, pre-processing, feature extraction, the selection of most discriminant features, and their
classification into the respective classes (i.e., activities) [
] as depicted in
Figure 2
. The first step is the data collection using the setup of wearable sensors, force plates, cameras, etc. The selection of such a sensor is very much dependent on the nature of ADL to be recognized.
In many cases, it is inappropriate to use raw sensor data directly for activity recognition because it may contain noise or irrelevant components. Therefore, data pre-processing techniques, such as
normalization, recovery of missing data, unit conversion, etc., are applied in the second step to make the raw data suitable for further analysis. Third, the high-level features are extracted on this
pre-processed data based on the expert’s knowledge in the considered application domain. Usually, several features are extracted and the most important of them are selected such that they retain the
maximum possible discriminatory information. This process is known as feature selection which not only reduces the dimensions but also reduces the need for storage and computational cost. Finally,
the features are recognized into respective activities using the classifier which provides the separation between the features of different activities in the feature space. It is worth mentioning
that the performance of the HAR system is very much dependent on every step of the sequence.
Numerous researchers have proposed handcrafted features on visual [
] and time series data [
] to detect and recognize human activities. However, there is no guarantee that the engineered handcrafted features work well for all the scenarios. Many factors, such as the nature of input data and
prior knowledge, in the application domain, play an important role to extract the optimal features. Therefore, the researchers are continuously trying to explore more systematic ways to get the
optimal features. Lately, deep learning-based HAR systems, e.g., References [
], have also explored to extract the features using raw data of wearable sensors automatically. These systems exploited Artificial Neural Networks (ANNs), which consist of multiple artificial
neurons, arranged and connected in several layers such that the output of the first layer is forwarded as an input to the second layer, and so forth. That is, each next layer is capable to encode the
low-level descriptors of the previous subsequent layer. Hence, the last layer of the deep ANN provides highly abstract features from the input data. Though the deep learning-based techniques have
demonstrated excellent results on various benchmark datasets, it is, however, quite difficult to rigorously assess the performance of feature learning methods. Despite their good performance, they
need a large amount of training data to tune the hyper-parameters. The lack of knowledge in the implementation of optimal features [
], the selection of the relevant features that represent the ongoing activity [
], and the parameters in the classification techniques [
] make the whole process much complicated.
This paper presents a novel two-level hierarchical method to recognize human activities using a set of wearable sensors. Since the human ADL consists of several repetitive short series of actions, it
is quite difficult to directly use the sensory data for activity recognition because the two or more sequences of the same activity data may have large diversity. However, a similarity can be
observed in the temporal occurrence of the atomic actions. Therefore, the objective of this research is to analyze the recognition score of every atomic action to represents a composite activity. To
solve this problem, we propose a two-level hierarchical model which detects atomic activities at the first level using raw sensory data obtained from multiple wearable sensors, and later the
composite activities are identified using the recognition score of atomic activities at the second level. In particular, the atomic activities are detected from the original sensory data and their
recognition scores are obtained. Later, the features are extracted from these atomic scores to recognize the composite activities. The contribution of this paper is two-fold. First, we propose two
different methods for feature extraction using atomic scores: Handcrafted features, and the features obtained using subspace pooling technique. Second, several experiments are performed to analyze
the impact of different hyper-parameters during the computation of feature descriptors, the selection of optimal features that represent the composite activities, and the evaluation of different
classification algorithms. We used the CogAge dataset [
] to evaluate the performance of our proposed algorithm which contains the data of 7 composite activities performed by 6 different subjects in different time intervals using three wearable devices:
smartphone, smartwatch, and smart glasses. Each of the composite activities can be represented using the combination of 61 atomic activities. We considered each atomic activity as a feature hence, a
feature vector of 61 dimension is used to represent the composite activity. The recognition results are compared with existing state-of-the-art techniques. The recognition results of the proposed
technique and their comparison with the existing state-of-the-art techniques confirm its effectiveness.
The rest of this paper is organized as follows: a brief review of literature on human activity recognition is presented in
Section 2
. The overview of the proposed method is described in
Section 3
. The proposed activity recognition algorithm is presented in
Section 4
. The experimental evaluation through different qualitative and quantitative tools is carried out in
Section 5
. The conclusion is drawn in
Section 6
2. Background and Literature Review
Human activity recognition has gained the interest of the research community in the last two decades due to its several applications in surveillance systems, rehabilitation places, gaming, and
others. This section summarizes the most existing work in the field of human activity recognition using sensor-based time-series data.
A composite activity comprises several atomic actions. Numerous existing techniques focused on identifying the simple and basic human actions, whereas recognizing the composite activities remains an
active problem. The applications in daily life require the identification of high-level human activities which are further composed of smaller atomic activities [
]. This paper summarizes the existing piece of work on hierarchical activity recognition techniques to encode the temporal patterns of composite activities. The techniques proposed in References [
] have concluded that the hierarchical recognition models are effective to recognize human activities. The authors in Reference [
] presented a method to recognize the cooking-related activities in visual data using pose-based, hand-centric and holistic approaches. In the first step, the hand positions and their movements are
detected and then the shape of the knife and vegetable is determined. Lastly, the composite activities are recognized using fine-grained activities. The technique in Reference [
] proposed a hierarchical discriminative model to analyze the composite activities. They employed a predefined sequence of atomic actions and reported improvements in recognition accuracy when the
composite activities are recognized using a hierarchical approach. In Reference [
], a hierarchical model is proposed to recognize the human activities using accelerometer sensor data of smartphone. The technique proposed in Reference [
] employed a two-level classification method to recognize 21 different composite activities using a set of wearable devices on multiple positions of the human body. The authors in Reference [
] presented an approach to detect the primitive actions in the recorded data of composite activities which were used to recognize the ADLs using the limited amount of training data. The technique
proposed in Reference [
] employed a Deep Belief Network to construct the mid-level features which were used to recognize the composite activities. In comparison with the aforementioned techniques, we propose a more generic
and hierarchical technique to recognize the composite activities using the score of underlying atomic activities. In the first step, the score of atomic activities is computed directly from the input
data. Later, the scores of atomic activities are used to recognize the composite activities. The atomic activities are defined manually to make our hierarchical approach more general. Though this
paper mainly emphasizes the recognition of 7 composite activities (available in the selected dataset), we, however, believe that many other composite activities can also be recognized by including
more atomic activities describing the variational movements of the human body, which reflects the generality of the proposed technique.
In Reference [
], the authors reviewed the existing literature on human activities in the aspect that how they are being used in different applications. They concluded that activity recognition in visual data is
not much effective due to the problems of clutter background, partial obstruction, changing of scale, illumination, etc. Lara et al. [
] surveyed the state-of-the-art techniques to recognize human activity using wearable sensors. They explained that obtaining the appropriate information on human actions and behaviors is very
important for their recognition, and it can be efficiently achieved using sensory data. The authors in References [
] have also assessed the use of wearable sensors for human activity recognition. Shirahama et al. [
] proposed that the current spread of mobile devices with multiple sensors may help in the recognition and monitoring of ADL. They employed a codebook-based approach to get the high-level
representation of the input data which was obtained from different sensors of the smartphone. The technique proposed in Reference [
] recognizes the human activities using the sensory data which is obtained from 2-axes of the smartphone accelerometer sensor. This research also concluded the effectiveness and contribution of each
axis of the accelerometer in the recognition of human activities. In comparison with the aforementioned techniques, the proposed technique uses multimodal sensory data which is obtained from three
unobtrusive wearable devices. We show that the fusion of multimodal sensory data provides higher accuracy to recognize the ADL.
In machine learning algorithms, the selection of optimal features plays a vital role to obtain excellent recognition results. Furthermore, in the case of high-dimensional data, the reduction of
dimensions may not only help to improve the recognition accuracies but also to reduce the memory requirement and computational cost [
]. The dimensionality reduction usually can be achieved using two techniques: feature selection and feature extraction [
]. The authors in Reference [
] proposed a hybrid approach by employing both feature selection and feature extraction techniques for dimension reduction. Lately, a few authors, e.g., References [
], proposed feature extraction methods using the subspace pooling technique. The technique proposed in Reference [
] employed singular value decomposition (SVD) for subspace pooling to obtain the optimal set of features from high dimensional data. The authors in Reference [
] extracted a set of features using SVD and the principal singular vectors to encode the feature representation of input data. Zhang et al. [
] also employed SVD for subspace pooling technique in their work. Guyon et al. [
] have discussed multiple methods of feature selection in their research and concluded that clustering and matrix factorization performed best when the dimensions became very large. Similarly, the
techniques proposed in References [
] have also employed SVD for optimal features selection. Song et al. [
] exploits principal component analysis (PCA) to select the most prominent and important features. They concluded that the recognition accuracy remained the same even after reducing the dimensions by
selecting a few components. The authors in References [
] have also used PCA for optimal feature selection.
The researchers have also assessed the performance of different classification algorithms to recognize the ADL. For example, the authors in Reference [
] employed several machine learning algorithms on multivariate data to recognize human activities. They assessed the performance of Random Forest (RF), kNN, Neural Network, Logistic Regression,
Stochastic Gradient Descent, and Naïve Bayes and concluded that Neural Network and logistics regression techniques provide better recognition results. In Reference [
], the authors assessed the different kernels of SVM to recognize the ADL which were recorded using Inertial sensors. The authors in Reference [
] concluded that the selection of kernel function in SVM along with the optimal values of hyperparameters plays a critical role concerning the data. The authors in Reference [
] have also used SVM as a classification tool in their research. Yazdansepas et al. [
] proposed a method to recognize the human activities of 77 subjects. They assessed the performance of different classification algorithms and concluded that the random forest algorithm provides the
best results. The hidden Markov model has been widely used for activity recognition [
]. It is a sequential probabilistic model where a particular discrete random variable describes the state of the process. The technique proposed in Reference [
] employed Conditional Random Fields (CRFs) to encode the sequential characteristics of composite activity. Deep learning-based techniques, e.g., References [
], have also been employed to recognize human activities. Despite their good performance, they need a large amount of training data to tune the hyper-parameters [
]. The ensemble classifier (i.e., combined predictions of several models) have been also employed to recognize the ADL. For example, the authors in Reference [
] presented that ensemble classifiers gave more accurate results than any other single classifier. Mishra et al. [
] explained that the increase in variety and size of data affect the performance of a classifier. They concluded that estimations of more than one classifier (i.e., ensemble classification) are
required to improve the performance. Similarly, the authors in Reference [
] employed an ensemble classifier using a voting technique in the classification of patterns. They formed a few sets of basic classifiers which were trained on different parameters. They combined
their predictions by using a weighted voting technique to get a final prediction. Their research showed that ensemble classifier is quite a promising method, and it might get popular in other
science-related fields. There were many techniques for ensemble classification but the voting-based technique is an efficient one.
In comparison with existing techniques, this paper presents a generic activity recognition method using the subspace pooling technique on the scores of atomic activities which were computed from the
original sensory data. In particular, two different types of features are computed from the atomic scores, and their performance is assessed using four different classifiers with different
parameters. The performance evaluation and its comparison with existing state-of-the-art techniques confirm the effectiveness of the proposed method.
3. Overview of the Proposed Method
The analysis of human activities has gained the attention of the research community due to its use in several daily life applications. Typically these activities are recorded using multiple
electronic devices. In this paper, we used the CogAge dataset [
] which is collected by using three unobtrusive wearable devices: smartphone, smartwatch, and smart glasses. In particular, the LG G5 smartphone was placed in the proband’s front left pocket of the
jeans. The mobile device is used to capture body movement. Second, the Huawei smartwatch was placed on a subject’s left arm to record the movements of the hand. Third, the JINS MEME glasses are worn
by the subject to get the head movement. Since the human activity of daily life consists of several repetitive and concurrent short sequences of actions, they cannot be directly estimated from the
sensory data because the multiple sequences of the same activity data may have large diversity. However, a similarity can be observed in the temporal occurrence of the atomic actions. A two-level
hierarchical method is presented to recognize human activities. First, the scores of the atomic activities are obtained. Secondly, two different types of features are extracted using the score of
atomic actions. The proposed features are evaluated using different classification algorithms. The recognition results of the proposed technique and their comparison with the existing
state-of-the-art techniques are presented.
4. Proposed Method
This section presents the proposed two-level hierarchical model for composite activity recognition. In the first step, we employed a codebook-based approach to get the recognition scores of atomic
activities. Second, different features are extracted on these scores to recognize the composite activities. The description of each of the two steps is outlined in the following subsections.
4.1. Recognition of Atomic Activities
We recall that an ADL is usually a long term activity (i.e., composite activity) and consists of a sequence of small actions (i.e., atomic activities). The recognition scores of the atomic activities
are computed as described in our earlier paper (References [
]) and used as input data to our framework to describe the composite activity. The complete scenario is described in the following to assist the reader.
To recognize the atomic activities, the multi-dimensional data is recorded using the aforementioned sensors. The recognition process consists of two main phases: “Feature extraction” and “training/
test of model”. From each of the sensors, the features are extracted using a codebook-based approach and they are fused into a single high-dimensional feature. The codebook-based approaches compute a
histogram-type feature depicting the frequencies of characteristic subsequences, called codewords, in a sequence [
]. They first construct a codebook by grouping the similar subsequences using any clustering algorithm (we used K-means), whereas the subsequences are collected by following a sliding window
approach. The center of each group is set as “codeword”. Later in the second step, the features are computed on other subsequences by assigning them to the most similar codeword. Therefore, the
resulting feature represents the repetition of each codeword in the sequence (i.e., histogram) as depicted in
Figure 3
. The codebook-based features are extracted from the sequences of each sensor, and they are fused using the strategy of early fusion [
]. It concatenates all the features into a high-dimensional feature vector. That is, each atomic activity is represented as a single high-dimensional feature vector which is used to train the
classification algorithm. After training, the unknown sequences (i.e., test sequences) are passed to the trained model and their atomic scores are obtained which represent the probability of an
atomic activity in the test instance. Since the final feature vector is high-dimensional (after fusion), we used Support Vector Machine (SVM) [
] with a one-versus-all approach due to its effectiveness for high-dimensional data [
]. The model is trained to produce a scoring value between 0 and 1 as an output that represents the score of the atomic activity. That is, the large score means the more likely the example includes
the atomic activity. This score is computed based on the distance between the subsequence example and the classification boundary in the high-dimensional space. We trained
number of SVMs to get the
atomic scores for a testing example. In the online setting, the
number of SVMs calculate the atomic scores for the sequences recorded from different sensors in a certain time interval. Since this paper mainly emphasizes the recognition of composite activities, we
refer to the reader to review the details of feature computation of atomic activities in our earlier paper (References [
]). The details can also be found on the web page of our earlier paper (Reference [
4.2. Recognition of Composite Activities
The objective of the proposed research is to recognize the composite activities using the scores of
atomic activities which were computed by following the method described in
Section 4.1
. Suppose that, for
composite activities, we have
instances, and the
th instance of a composite activity
$X ( k ) ( 1 ≤ k ≤ K )$
has a length of
$T k$
time points. Mathematically, it can be described as,
$X ( k ) = ( x 1 ( k ) , x 2 ( k ) , x 3 ( k ) , … . . x T ( k ) ) ,$
where each
$x i ( k ) ∈ R M$
represents a vector of
atomic scores at given time
$i ( 1 ≤ i ≤ T k )$
$X ( k )$
. The term
$x ( k )$
can be described as,
$x i ( k ) = ( x i 1 ( k ) , x i 2 ( k ) , x i 3 ( k ) , … . , x i j ( k ) , … . . , x i M ( k ) ) ,$
$x i j ( k )$
has the value in the range between 0 and 1 and describes the atomic score of
th atomic activity
$( 1 ≤ j ≤ M )$
, computed at time
$X ( k )$
. Several high-level features are extracted from these atomic scores (computed in
Section 4.1
) to encode the composite activities. In particular, we employed two different feature extraction techniques: handcrafted features and subspace pooling-based features. The detail of each of the
technique is described in the subsequent sections.
4.2.1. Handcrafted Features
Handcrafted feature extraction techniques are simple to implement and lower in computational complexity. They can be computed on time series data either using simple statistical operations (e.g.,
finding maximum, minimum, average, standard deviation, etc.), or more detailed components, such as frequency domain-based features, which are related to the Fourier transform of the signals [
]. We computed 18 handcrafted features on the data of atomic scores in which 15 features are computed using statistical formulation and the rest are based on frequency domain. The list of the
computed features is summarized in
Table 1
, and their description is outlined in the following. These statistical values were computed to every feature dimension separately.
• Maximum: Let X is the feature vector. The $M a x ( X )$ function finds and returns the largest feature value $x i ∈ X$.
• Minimum: The $M i n ( X )$ function finds and returns the smallest feature value $x i ∈ X$.
• Average: For
number of feature values, the average returns the center value of feature vector
. That is,
$A v e r a g e ( X ) = μ = ∑ i = 1 n x i N .$
• Standard Deviation: It describes the amount of disparity in feature vector
$X = { x 1 , x 2 … x N }$
and can be computed using the following formulation:
$S t d e v ( X ) = σ = 1 N ∑ i = 1 N ( x i − μ ) 2 .$
• Zero Crossing: It is used to estimate the difference between a rapid and slow movement of activity [
] and can be calculated by estimating how often the signal value crosses zero in either direction.
• Percentiles: Percentile defines a number where a certain percentage of scores fall below that number. That is, the pth percentile is a value such that, at most, $( 100 × p ) %$ of the
measurements fall below than this value, and $100 × ( 1 − p ) %$ of the measurements fall above this value. For instance, the 25th percentile means that this value is bigger than 25 values and
smaller than 75 feature values. The 25th percentile is also called the first quartile. The 50th percentile is generally the median, and the 75th percentile is also called the third quartile.
• Interquartile Range: The difference between the third and first quartiles is known as the interquartile range.
• Skewness: It calculates the asymmetry of the probability distribution of data about its mean and can be calculated as:
$S k = 1 N σ 3 ∑ i = 1 n ( x i − μ ) 3 .$
• Kurtosis: It is the measure that how heavily the tails of distribution differ from the tails of a normal distribution. The higher value of kurtosis corresponds to the greater extremity of
deviations which refer to outliers [
]. Mathematically, it can be computed as:
$K r = 1 N σ 4 ∑ i = 1 n ( x i − μ ) 4 .$
• Auto-correlation: It measures the degree of similarity between a given time series data and a lagged version of itself over the successive time intervals. That is, it depicts the notch of
similarity between a current feature value and its earlier values [
], and it can be computed as:
$r k = ∑ i = 1 N − k ( x i − μ ) ( x i + k − μ ) ∑ i = 1 N ( x i − μ ) 2 .$
• Order mean values: They are computed from the arranged set (increasing order) of values. That is, the first-order mean is the simple smallest sample value
$x 1$
in an arranged feature set
, the second-order defines the second smallest value
$x 2$
, and so forth [
• Norm values: They are used to estimate the distance of a feature vector from its origin. We used two different measures:
$L 1$
-norm (also known as Manhattan distance) and
$L 2$
-norm (also known as Euclidean norm) [
• Spectral energy: We recall that several sensors are used in the recorded data to analyze human activities, and they can be considered as a function whose amplitude is changing over time. We used
Fourier transformation to transform the time-based signal to its frequency spectrum, and spectral energy formulation is employed to calculate the signal energy distribution across the frequency.
It measures the sum of squared amplitude of frequency content
$ϝ ( n )$
. That is,
$S E = ∑ i = 1 N ϝ ( n ) 2 .$
It can also be computed using normalized frequency spectra. That is,
$ϝ ^ ( n ) = ϝ ( n ) ∑ i = 1 N ϝ ( n ) .$
After the normalization process, the Equation (
) can be described as,
$N S E = ∑ i = 1 N ϝ ^ ( n ) 2 .$
• Spectral entropy: It is based on the concept of the Shannon entropy and is used to measure the spectral of the signal distribution in terms of frequency. The mathematical formulation of spectral
entropy can be described as:
$S E N = − ∑ i = 1 N ϝ ^ ( n ) × log ϝ ^ ( n ) .$
4.2.2. Subspace Pooling
The subspace pooling-based technique aims to model/project the original complex input data into new dimensions such that the basic structure of data can be analyzed easily and accurately [
]. Thus, rather than working on the original input space, further processing will be applied on a more robust representation of data in subspace [
]. Such techniques, e.g., Reference [
], have proven to be effective in extracting the temporal features for activity recognition. Therefore, we also applied subspace pooling techniques on our recorded data using singular value
decomposition (SVD) [
] and principal component analysis (PCA) [
] techniques. SVD is the decomposition of the complex matrix into three simple matrices, whereas PCA defines the orthogonal projection of original data to the new dimensions. They maximize the
variance of the projected data [
] such that the interesting properties of the original data are described. In particular, the objective of these techniques is to reduce the complexity of data by converting the original data to less
complex subspace data with the help of eigenvectors and eigenvalues. They are the well-known techniques of numerical linear algebra system and have been extensively used to reduce the complexities in
original high dimensional data [
Since the real world, data may contain redundant, noisy, and irrelevant features which make the learning process complex; the removal of such features will not only reduce the complexity in the model
learning but also reduce the need for storage and computational cost [
]. Another simple yet powerful use of the SVD and PCA is that they can also be used in the selection of optimal features by discarding the irrelevant information [
]. Therefore, we also employed them in the feature selection process to select the most influential features of original data that are having a greater impact on the recognition of composite
activity. The entire process is explained in the following and depicted in
Figure 4
• Step 1: The aforementioned subspace pooling techniques are applied on original multidimensional data (i.e., the recognition score of the atomic activities).
• Step 2: Eigenvectors of $n × n$ dimensions are extracted using SVD and PCA, where $n = 18 × 61 = 1098$.
• Step 3: The absolute sum of every single column of eigenvectors with dimension $n × 1$ is computed.
• Step 4: The sorting algorithm is applied on the sum of eigenvectors with respect to the index to assess the importance of every feature (obtained after subspace pooling technique). The feature
with the highest sum is considered the most important feature.
• Step 5: The original data (i.e., atomic scores) is arranged with respect to the sorted sum.
4.3. Classification
We evaluated the performance of our proposed features using different classification techniques. A short description of each of the technique is described in the following:
• Support Vector Machine (SVM) [
] is a simple and well-known supervised machine learning algorithm to solve the problem of classification. SVM first maps the training instances into high dimensional space and then extracts a
decision boundary (e.g., hyperplane) between the instances of different classes based on the principle of maximizing the margin (i.e., the distance between the hyperplane and the nearest data
point in each class is maximized). The training instances which are very close to the class boundary are known as support vectors. The training process aims to find such a hyperplane that should
be in the middle of positive and negative instances, and the distance of hyperplane with the nearest positive and negative instances should be maximized [
]. We used simple LibLinear SVM [
], and the optimal value or hyperparameter/regularization parameter
is selected empirically within the range of (
$2 − 6$
$2 6$
• RF [
] is an ensemble learning algorithm that uses multiple individual learners and fuses the results. In particular, it comprises the collection of decision trees with a random subset of data, and
the outputs of all the decision trees are then combined to create the final decision (i.e., recognition). The regularization parameter
is selected empirically in the range of 600–2000 with the increment size of 200.
• The Hidden Markov Model (HMM) [
] has also been employed as a classifier to recognize the human actions [
] using a few unobservable (i.e., hidden) states, and sequences whose behavior depends on the hidden states [
]. The model is usually constructed using a set of states with a stochastic matrix storing the transition information between each state. The elements of such a matrix hold probabilities of
states over time known as transition probabilities. Every state in HMM is associated with a probability density function (PDF) which helps in determining that every state emits a sequence over
time known as observation sequence. The main objective of HMM is to learn the behavior of hidden states based on observable sequences [
• The idea of ensemble classifier has also attracted an increasing amount of attention from the research community due to its effectiveness in the field of pattern recognition [
]. Ensemble classifier aims that the recognition technique not to depend on a decision of a single classification model, rather the final decision is based on the combination of several
individual classifier [
]. It can be built by training several individual classifiers (also known as base learners) and combining their results/estimations by voting (either weighted or unweighted). That is, the
estimations of all classification algorithms are combined so that votes with the highest number (i.e., maximum occurrences) are considered as the final ensemble prediction [
]. It is quite a known fact that ensemble classifiers give more accurate results than the other individual classifiers (base learners) from which they have been built.
5. Experiments and Results
We used the CogAge dataset [
], which contains both atomic and composite activities. The data is recorded using three unobtrusive wearable devices: LG G5 smartphone [
], Huawei smart watch [
] and JINS MEME glasses [
]. The smartphone was placed in the subject’s front left pocket of the jeans and it consists of 5 different sensory modalities: linear accelerometer (all sampled at 200 Hz), gyroscope, magnetometer
(100 Hz), gravity, and 3-axis accelerometer. These sensory modalities are used to record body movement. Specifically, the linear accelerometer provides a three-dimensional sequence that specifies
acceleration forces (excluding gravity) on the three axes. The gyroscope encodes three-dimensional angular velocities. The magnetometer sensor provides a three-dimensional sequence to describe the
intensities of the earth’s magnetic field along the three axes which is quite useful to determine the smartphone’s orientation. The gravity sensing modality also generates a three-dimensional
sequence that encodes the gravity forces on the three axes of the smartphone. The 3-axes accelerometer generates a three-dimensional sequence that specifies acceleration forces (including gravity)
acting on the smartphone. Second, the smartwatch was placed on a subject’s left arm and it consists of two different sensory modalities: gyroscope and 3-axis accelerometer (both sampled at 100 Hz).
Each of these sensing modalities generates a three-dimensional sequence of acceleration and angular velocities on the watch’s x, y, and z axes. These modalities are used to encode hand movements.
Finally, the smart glasses are worn by the subject and it generates 3-dimensional data of accelerometer (sampled at 20 Hz). The accelerometer sensor in the smart glass provides three-dimensional
acceleration information on the glasses’ x, y, and z axes which are used to record the head movement. Thus, the whole setup of activity encoding used eight sensor modalities through these wearable
devices. The movement data of smart glasses and the watch is initially sent to the smartphone via Bluetooth connection and later all the recorded data sent to a home-gateway using a Wi-Fi connection.
The entire process of recording is depicted in
Figure 5
The CogAge dataset contains 9700 instances of 61 different atomic activities obtained from 8 subjects. Among 8, there are 5 subjects who contributed to the collection of 7 composite activities too,
using the aforementioned three wearable devices. There is one subject who only contributed to the collection of composite activities. Therefore, the dataset contains the composite activities of 6
subjects. An android application in a smartphone connects the smartwatch and glasses via Bluetooth, is used to record the composite activities. Thus, the whole recording setup provides a convenient
and natural way for the subject such that he/she can move freely to the kitchen, washroom, or living room with these devices to perform daily life activities. More than 1000 instances of composite
activities are collected, and missing data is removed during the pre-processing phase. Finally, the dataset comprises the 471 instances of left-hand activities (i.e., the activities are mainly
performed using the left hand only) and 281 instances of both hands (i.e., the composite activities are performed using both hands). Therefore, the dataset contains in total of 752 instances of
composite activities, and their description is outlined in
Table 2
The participants in data collection belong to different countries with diverse cultural backgrounds. Thus, their way of performing the same activity is also quite different (e.g., cooking). The
versatility in performing the same activity makes the dataset much complex and a challenging task for the HAR systems to validate the generality of their methods, despite the low number of subjects.
The data for composite activities was collected for training and testing phases separately in different time intervals. The length of every activity is not constant; it differs from 45 s to 5 min
because some activities take a long time to be completed, like preparing food, and, on the other side, some activities take a shorter time, for example, handling medications. The atomic activity
recognition algorithm [
] produced atomic scores after each time interval of approximately 2.5 s. We divided each composite activity into a window of size 45 s, i.e., 18 atomic scores vectors, for each composite activity
instance. The longer instances were divided into multiple windows with a stride size of 6. The data of composite activities are divided into two parts: left-hand and right-hand activities data. Since
the recording of each activity comprises a different number of instances, they are empirically reduced to a fixed number.
5.1. Experimental Setting
We performed three experiments each with three different experimental settings. In each experiment, the dataset is divided into training and testing sets differently. The experiments are performed
using activities data which have been performed from left-hand (i.e., the smartwatch was placed on the subject’s left arm) and using both-hand (i.e., the smartwatch was either placed on a subject’s
left or right arm), separately. The experimental settings are outlined in the following:
• k-fold cross-validation (CV): Training and testing data split on basis of k folds. We set the value of k = 3.
• Hold-out cross-validation: We used the data of 3 subjects for training and the data of the remaining 3 subjects for testing purposes, iteratively.
• Leave-one-out cross-validation: The data of 5 subjects are used for training, and the data of the remaining 1 subject is used for testing purposes, iteratively.
5.1.1. Using Handcrafted Features
We computed 18 features on input data (i.e., atomic scores) as described in
Section 4.2.1
. These 18 features were computed against every column of features set of a single activity, i.e., 18 × total number of columns, and they are concatenated in a single row. Since the input data is
61-dimensional, the dimension of the handcrafted feature is
$1 × ( 18 × 61 )$
(i.e., 1 × 1098). We used SVM and random forest (RF) to evaluate these computed features, and their recognition accuracies are summarized in
Figure 6
5.1.2. Using Subspace Pooling Technique-Based Features
In the second set of experiments, the subspace pooling-based techniques are applied to the input data to project it into new dimensions to get the more robust representation of data in subspace [
]. In particular, we applied SVD and PCA on the input data, and their full-length features are used to recognize the composite activities using SVM and random forest algorithms. It is important to
mention that all the features in new dimensions are used in the classification process.
Figure 7
summarizes the results of experiments using k-fold cross-validation and hold-out cross-validation, whereas the results of leave-one-out cross-validation are summarized in
Figure 8
. It can be observed that the features extracted by using PCA performed quite well as compared to SVD.
5.1.3. Using Optimal Feature Selection
We also assess the effectiveness of the optimal feature selection method on the data extracted using subspace pooling techniques. The features are selected based on the variance of eigenvectors
(i.e., eigenvalues). Since there were 18 sequences in each activity of the original data and the dimension of each row is 61; they are concatenated in a 1-dimensional vector, i.e., $1 × 1098$. The
data of all the activities are arranged and the following steps are performed:
• First, we applied SVD and PCA separately and the matrix of eigenvectors (1098-dimensional) is extracted.
• Second, the sum of absolute values of every column of eigenvectors matrix is calculated, arranged in descending order with respect to its index number, and stored in a separate matrix.
• Third, original combined data was arranged with respect to the sorted sum of absolute eigenvectors.
• Fourth, the dimensions were reduced by iteratively selecting the small sets of features keeping in view the variance of eigenvectors.
• Finally, the selected set of features were evaluated using SVM and RF.
Similar to other experiments in the earlier categories, the
-fold cross-validation is first employed to split the data into training and testing sets, and the optimal features are classified using SVM and random forest.
Table 3
shows the summarized results of composite activity recognition using
-fold cross-validation. In the second set of experiments, the hold-out cross-validation technique is employed, and the activities data of 3 subjects are used in the training set, whereas the rest
activities data of 3 subjects are used in the testing set. Both training and testing sets are arranged according to the sorted absolute sum with their corresponding labels, and the selected features
are classified using SVM and RF.
Table 4
shows the summarized results of composite activity recognition using hold-out cross-validation. Similarly, the results of leave-one-out fold cross-validation are summarized in
Table 5
5.1.4. Using HMM
In HMM classification, the objective is to calculate the probability of hidden states given the observation sequence. Hence, a sequence of observation and 7 models were constructed to find the model
which best describes the observation sequence. In this experimental setting, the Gaussian HMM model with its hyperparameters, i.e., ‘n-components’, ‘covariance type’, ‘starting probabilities’, and
‘transmission matrix’, is used. The model is trained using two methods to compute the observation sequence, and the short detail of each method is described in the following.
In the first method, the sequence of observation is calculated using the leave-one-out cross-validation technique. Since there are 6 subjects in the dataset, the Gaussian HMM model is trained using
the activity data of 5 subjects, whereas the remaining 1 subject is used for testing purposes. We calculate the likelihood information while comparing the testing data to all the predicted
observation sequences. The model with maximum likelihood was assigned to the input testing data. After getting all the models against all the testing data, the accuracy between the original models,
and the estimated or predicted models is calculated. In the second method, the hold-out cross-validation technique is used to calculate the observation sequence. In particular, the activity data of 3
subjects is used in the model training, and the rest of the instances of 3 subjects are used for testing purposes. The maximum likelihood between trained and testing sequences is calculated as
mentioned above.
Figure 9
shows the result of the testing accuracy of both methods. It can be observed that the leave-one-out performed better than the hold-out cross-validation.
5.1.5. Using Ensemble Classifier
Lastly, we also assess the performance of the ensemble classifier to recognize the activities. The idea is to obtain the prediction results from different classification algorithms, and the final
result is computed based on the maximum voting technique. That is, human activities are recognized by combining all the predictions made by other individual classifiers. We employed 5 different
classifiers with different feature representation of the same activity data: (1) SVM classifier using SVD-based features, (2) RF classifier using SVD-based features, (3) SVM classifier using
PCA-based features, (4) RF classifier using PCA-based features, and (5) HMM model. All the models are trained with respective labels using the hold-out cross-validation technique to reduce the effect
of overfitting or underfitting.
To obtain the SVD-based and PCA-based features, the same implementation is adopted as described in
Section 4.2.2
. The SVM classifier is trained with hyperparameter
$C = 1$
, and the RF classifier is used with hyperparameter
$n = 800$
, whereas the Gaussian HMM model is trained using hyperparameters
-component = 6, and covariance type =
$t i e d$
. The label information is gathered from each of the aforementioned 5 learning models and the label with maximum frequency is assigned to the respective instance of testing data.
Figure 10
shows the comparison between the original results of individual classifiers and the ensemble classifier.
5.2. Discussion
This paper presents a technique to recognize the composite activities. We employed two different types of features for activity recognition: Handcrafted features and the features obtained using
subspace pooling techniques. The experiments are carried out using three different settings:
-fold cross-validation, hold-out cross-validation, and leave-one-out cross-validation. A comparative analysis between all the techniques is presented in
Table 6
. It can be observed that handcrafted features perform quite well along with random forest classifiers to recognize the composite activities. Overall, it achieved average recognition accuracy of 79%.
The recognition results of the proposed features are also evaluated with state-of-the-art techniques [
]. A comparison analysis is carried out using two different experiments. Similar to Reference [
], in the first comparative analysis, the technique proposed in Reference [
] employed HMM and reported better recognition results using 3 states and 1000 iteration, whereas we also employed HMM by tuning the model hyperparameters, and the best results are achieved using 4
states, tied covariance matrix, and 1000 iterations. The recognition results in comparison with state-of-the-art techniques are summarized in
Table 7
. Second, the handcrafted features are used to recognize the composite activities using leave-one-out cross-validation. We performed 6 different experiments using handcrafted features, and all of
them show good results. The results are shown in
Table 8
6. Conclusions
In this paper, a two-level hierarchical technique is proposed to recognize human activities using a set of wearable sensors. Since the human activity of daily life consists of several short sequences
of actions (known as atomic activities), they are detected from the original sensory data and their recognition scores (in probabilities) are obtained. Later, the feature representation of composite
activities is obtained from these atomic scores. We present two different methods for feature extraction: handcrafted and subspace pooling-based features. The proposed method is evaluated on a large
public dataset and the recognition results of the composite activities are compared with existing state-of-the-art techniques which confirm its effectiveness.
Author Contributions
Conceptualization, F.A., M.H.K. and M.A.N.; methodology, F.A., M.H.K., M.A.N. and M.G.; software, F.A. and M.H.K.; validation, M.H.K., M.A.N. and M.S.F.; investigation, F.A., M.H.K., M.A.N. and M.G.;
writing—original draft preparation, F.A. and M.H.K.; writing—review and editing, M.H.K., M.A.N. and M.S.F.; supervision, M.H.K., M.A.N. and M.G.; project administration, M.H.K., M.A.N., M.S.F. and
M.G. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
Conflicts of Interest
The authors declare that they have no conflict of interest.
1. De-La-Hoz-Franco, E.; Ariza-Colpas, P.; Quero, J.M.; Espinilla, M. Sensor-based datasets for human activity recognition–a systematic review of literature. IEEE Access 2018, 6, 59192–59210. [
Google Scholar] [CrossRef]
2. Urwyler, P.; Rampa, L.; Stucki, R.; Büchler, M.; Müri, R.; Mosimann, U.P.; Nef, T. Recognition of activities of daily living in healthy subjects using two ad-hoc classifiers. Biomed. Eng. Online
2015, 14, 54. [Google Scholar] [CrossRef] [PubMed] [Green Version]
3. Khan, M.H. Human Activity Analysis in Visual Surveillance and Healthcare; Logos Verlag Berlin GmbH: Berlin, Germany, 2018; Volume 45. [Google Scholar]
4. Li, F.; Shirahama, K.; Nisar, M.A.; Köping, L.; Grzegorzek, M. Comparison of feature learning methods for human activity recognition using wearable sensors. Sensors 2018, 18, 679. [Google Scholar
] [CrossRef] [Green Version]
5. Ke, S.R.; Thuc, H.L.U.; Lee, Y.J.; Hwang, J.N.; Yoo, J.H.; Choi, K.H. A review on video-based human activity recognition. Computers 2013, 2, 88–131. [Google Scholar] [CrossRef]
6. Lara, O.D.; Labrador, M.A. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 2012, 15, 1192–1209. [Google Scholar] [CrossRef]
7. Hammerla, N.Y.; Halloran, S.; Plötz, T. Deep, convolutional, and recurrent models for human activity recognition using wearables. arXiv 2016, arXiv:1604.08880. [Google Scholar]
8. Radu, V.; Lane, N.D.; Bhattacharya, S.; Mascolo, C.; Marina, M.K.; Kawsar, F. Towards multimodal deep learning for activity recognition on mobile devices. In Proceedings of the 2016 ACM
International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, Heidelberg, Germany, 12–16 September 2016; pp. 185–188. [Google Scholar]
9. Koutroumbas, K.; Theodoridis, S. Pattern Recognition; Academic Press: Cambridge, MA, USA, 2008. [Google Scholar]
10. Peng, X.; Wang, L.; Wang, X.; Qiao, Y. Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice. Comput. Vis. Image Underst. 2016, 150, 109–125. [
Google Scholar] [CrossRef] [Green Version]
11. Cai, J.; Luo, J.; Wang, S.; Yang, S. Feature selection in machine learning: A new perspective. Neurocomputing 2018, 300, 70–79. [Google Scholar] [CrossRef]
12. Wang, H.; Kläser, A.; Schmid, C.; Liu, C.L. Dense trajectories and motion boundary descriptors for action recognition. Int. J. Comput. Vis. 2013, 103, 60–79. [Google Scholar] [CrossRef] [Green
13. Durgesh, K.S.; Lekha, B. Data classification using support vector machine. J. Theor. Appl. Inf. Technol. 2010, 12, 1–7. [Google Scholar]
14. Nurhanim, K.; Elamvazuthi, I.; Izhar, L.; Ganesan, T. Classification of human activity based on smartphone inertial sensor using support vector machine. In Proceedings of the 2017 IEEE 3rd
International Symposium in Robotics and Manufacturing Automation (ROMA), Kuala Lumpur, Malaysia, 19–21 September 2017; pp. 1–5. [Google Scholar]
15. Nisar, M.A.; Shirahama, K.; Li, F.; Huang, X.; Grzegorzek, M. Rank Pooling Approach for Wearable Sensor-Based ADLs Recognition. Sensors 2020, 20, 3463. [Google Scholar] [CrossRef]
16. Aggarwal, J.K.; Ryoo, M.S. Human activity analysis: A review. ACM Comput. Surv. (CSUR) 2011, 43, 1–43. [Google Scholar] [CrossRef]
17. Bulling, A.; Blanke, U.; Schiele, B. A tutorial on human activity recognition using body-worn inertial sensors. ACM Comput. Surv. (CSUR) 2014, 46, 1–33. [Google Scholar] [CrossRef]
18. Logan, B.; Healey, J.; Philipose, M.; Tapia, E.M.; Intille, S. A long-term evaluation of sensing modalities for activity recognition. In Proceedings of the International conference on Ubiquitous
Computing 2007, Innsbruck, Austria, 16–19 September 2007; pp. 483–500. [Google Scholar]
19. Rohrbach, M.; Rohrbach, A.; Regneri, M.; Amin, S.; Andriluka, M.; Pinkal, M.; Schiele, B. Recognizing fine-grained and composite activities using hand-centric features and script data. Int. J.
Comput. Vis. 2016, 119, 346–373. [Google Scholar] [CrossRef] [Green Version]
20. Blanke, U.; Schiele, B. Remember and transfer what you have learned-recognizing composite activities based on activity spotting. In Proceedings of the International Symposium on Wearable
Computers (ISWC) 2010, Seoul, Korea, 10–13 October 2010; pp. 1–8. [Google Scholar]
21. Rai, A.; Yan, Z.; Chakraborty, D.; Wijaya, T.K.; Aberer, K. Mining complex activities in the wild via a single smartphone accelerometer. In Proceedings of the Sixth International Workshop on
Knowledge Discovery From Sensor Data, Beijing, China, 12 August 2012; pp. 43–51. [Google Scholar]
22. Bharti, P.; De, D.; Chellappan, S.; Das, S.K. HuMAn: Complex activity recognition with multi-modal multi-positional body sensing. IEEE Trans. Mob. Comput. 2018, 18, 857–870. [Google Scholar] [
23. Nguyen, L.T.; Zeng, M.; Tague, P.; Zhang, J. Recognizing new activities with limited training data. In Proceedings of the 2015 ACM International Symposium on Wearable Computers, Osaka, Japan,
9–11 September 2015; pp. 67–74. [Google Scholar]
24. Nair, H.; Tan, C.; Zeng, M.; Mengshoel, O.J.; Shen, J.P. AttriNet: Learning mid-level features for human activity recognition with deep belief networks. In Proceedings of the Adjunct Proceedings
of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, London, UK, 11–13 September
2019; pp. 510–517. [Google Scholar]
25. Vrigkas, M.; Nikou, C.; Kakadiaris, I.A. A review of human activity recognition methods. Front. Robot. AI 2015, 2, 28. [Google Scholar] [CrossRef] [Green Version]
26. Attal, F.; Mohammed, S.; Dedabrishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical human activity recognition using wearable sensors. Sensors 2015, 15, 31314–31338. [Google Scholar] [
CrossRef] [PubMed] [Green Version]
27. Jiang, W.; Yin, Z. Human activity recognition using wearable sensors by deep convolutional neural networks. In Proceedings of the 23rd ACM international conference on Multimedia, Brisbane,
Australia, 26–30 October 2015; pp. 1307–1310. [Google Scholar]
28. Zhang, M.; Sawchuk, A.A. USC-HAD: A daily activity dataset for ubiquitous activity recognition using wearable sensors. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing,
Pittsburgh, PA, USA, 5–8 September 2012; pp. 1036–1043. [Google Scholar]
29. Lawal, I.A.; Bano, S. Deep human activity recognition using wearable sensors. In Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments,
Rhodes, Greece, 1–9 June 2019; pp. 45–48. [Google Scholar]
30. Shirahama, K.; Grzegorzek, M. On the generality of codebook approach for sensor-based human activity recognition. Electronics 2017, 6, 44. [Google Scholar] [CrossRef] [Green Version]
31. Javed, A.R.; Sarwar, M.U.; Khan, S.; Iwendi, C.; Mittal, M.; Kumar, N. Analyzing the effectiveness and contribution of each axis of tri-axial accelerometer sensor for accurate activity
recognition. Sensors 2020, 20, 2216. [Google Scholar] [CrossRef] [Green Version]
32. Khan, M.H.; Farid, M.S.; Grzegorzek, M. A generic codebook based approach for gait recognition. Multimed. Tools Appl. 2019, 78, 35689–35712. [Google Scholar] [CrossRef]
33. Rangarajan, L.; Veerabhadrappa. Bi-level dimensionality reduction methods using feature selection and feature extraction. Int. J. Comput. Appl. 2010, 4, 33–38. [Google Scholar]
34. Zebari, R.; Abdulazeez, A.; Zeebaree, D.; Zebari, D.; Saeed, J. A Comprehensive Review of Dimensionality Reduction Techniques for Feature Selection and Feature Extraction. J. Appl. Sci. Technol.
Trends 2020, 1, 56–70. [Google Scholar] [CrossRef]
35. Li, M.; Wang, H.; Yang, L.; Liang, Y.; Shang, Z.; Wan, H. Fast hybrid dimensionality reduction method for classification based on feature selection and grouped feature extraction. Expert Syst.
Appl. 2020, 150, 113277. [Google Scholar] [CrossRef]
36. Shi, Q.; Luo, H.; Han, J. Subspace Pooling Based Temporal Features Extraction For Audio Event Recognition. In Proceedings of the Interspeech 2019, Graz, Austria, 15–19 September 2019; pp.
3850–3854. [Google Scholar]
37. Zhang, S.; Zhang, Q.; Wei, X.; Wang, P.; Jiao, B.; Zhang, Y. Person Re-identification in Aerial Imagery. arXiv 2019, arXiv:1908.05024. [Google Scholar] [CrossRef] [Green Version]
38. Wei, X.; Zhang, Y.; Gong, Y.; Zheng, N. Kernelized subspace pooling for deep local descriptors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA,
USA, 18–23 June 2018; pp. 1867–1875. [Google Scholar]
39. Guyon, I.; Elisseeff, A. An introduction to variable and feature selection. J. Mach. Learn. Res. 2003, 3, 1157–1182. [Google Scholar]
40. Salem, N.; Hussein, S. Data dimensional reduction and principal components analysis. Procedia Comput. Sci. 2019, 163, 292–299. [Google Scholar] [CrossRef]
41. Sadou, B.; Lahoulou, A.; Bouden, T.; Avila, A.R.; Falk, T.H.; Akhtar, Z. Blind Image Quality Assessment Using Singular Value Decomposition Based Dominant Eigenvectors for Feature Selection. In
Proceedings of the 5th International Conference on Signal and Image Processing (SIPRO’19), Toronto, ON, Canada, 13–14 July 2019; pp. 233–242. [Google Scholar]
42. D’Addabbo, A.; Papale, M.; Di Paolo, S.; Magaldi, S.; Colella, R.; d’Onofrio, V.; Di Palma, A.; Ranieri, E.; Gesualdo, L.; Ancona, N. SVD based feature selection and sample classification of
proteomic data. In Proceedings of the International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, Zagreb, Croatia, 3–5 September 2008; pp. 556–563. [Google
43. Song, F.; Guo, Z.; Mei, D. Feature selection using principal component analysis. In Proceedings of the 2010 International Conference on System Science, Engineering Design and Manufacturing
Informatization, Yichang, China, 12–14 November 2010; Volume 1, pp. 27–30. [Google Scholar]
44. Malhi, A.; Gao, R.X. PCA-based feature selection scheme for machine defect classification. IEEE Trans. Instrum. Meas. 2004, 53, 1517–1525. [Google Scholar] [CrossRef]
45. Yuce, B.; Mastrocinque, E.; Packianather, M.S.; Pham, D.; Lambiase, A.; Fruggiero, F. Neural network design and feature selection using principal component analysis and Taguchi method for
identifying wood veneer defects. Prod. Manuf. Res. 2014, 2, 291–308. [Google Scholar] [CrossRef]
46. Gulzar, Z.; Leema, A.A.; Malaserene, I. Human Activity Analysis using Machine Learning Classification Techniques. Int. J. Innov. Technol. Explor. Eng. (IJITEE) 2019. [Google Scholar] [CrossRef]
47. Khan, M.H.; Schneider, M.; Farid, M.S.; Grzegorzek, M. Detection of infantile movement disorders in video data using deformable part-based model. Sensors 2018, 18, 3202. [Google Scholar] [
CrossRef] [Green Version]
48. Yazdansepas, D.; Niazi, A.H.; Gay, J.L.; Maier, F.W.; Ramaswamy, L.; Rasheed, K.; Buman, M.P. A multi-featured approach for wearable sensor-based human activity recognition. In Proceedings of the
2016 IEEE International Conference on Healthcare Informatics (ICHI), Chicago, IL, USA, 4–7 October 2016; pp. 423–431. [Google Scholar]
49. Sánchez, D.; Tentori, M.; Favela, J. Activity recognition for the smart hospital. IEEE Intell. Syst. 2008, 23, 50–57. [Google Scholar] [CrossRef] [Green Version]
50. Piyathilaka, L.; Kodagoda, S. Gaussian mixture based HMM for human daily activity recognition using 3D skeleton features. In Proceedings of the 2013 IEEE 8th Conference on Industrial Electronics
and Applications (ICIEA), Melbourne, Australia, 19–21 June 2013; pp. 567–572. [Google Scholar]
51. Cuntoor, N.P.; Yegnanarayana, B.; Chellappa, R. Interpretation of state sequences in HMM for activity representation. In Proceedings of the (ICASSP’05)—IEEE International Conference on Acoustics,
Speech, and Signal Processing, Philadelphia, PA, USA, 18–23 March 2005; Volume 2, p. ii-709. [Google Scholar]
52. Pietrzykowski, M.; Sałabun, W. Applications of Hidden Markov Model: State-of-the-art. Int. J. Comput. Technol. Appl. 2014, 5, 1384–1391. [Google Scholar]
53. Khan, M.H.; Farid, M.S.; Grzegorzek, M. A non-linear view transformations model for cross-view gait recognition. Neurocomputing 2020, 402, 100–111. [Google Scholar] [CrossRef]
54. Opitz, D.; Maclin, R. Popular ensemble methods: An empirical study. J. Artif. Intell. Res. 1999, 11, 169–198. [Google Scholar] [CrossRef]
55. Mishra, S.K. A review of ensemble technique for improving majority voting for classifier. Int. J. 2013, 3, 177–180. [Google Scholar]
56. Shen, H.B.; Chou, K.C. Ensemble classifier for protein fold pattern recognition. Bioinformatics 2006, 22, 1717–1722. [Google Scholar] [CrossRef]
57. Khan, M.H.; Farid, M.S.; Grzegorzek, M. Using a generic model for codebook-based gait recognition algorithms. In Proceedings of the 2018 International Workshop on Biometrics and Forensics (IWBF),
Sassari, Italy, 7–8 June 2018; pp. 1–7. [Google Scholar]
58. Wang, L. Support Vector Machines: Theory and Applications; Springer Science & Business Media: New York, NY, USA, 2005; Volume 177. [Google Scholar]
59. Khan, M.H.; Farid, M.S.; Grzegorzek, M. Spatiotemporal features of human motion for gait recognition. Signal Image Video Process. 2019, 13, 369–377. [Google Scholar] [CrossRef]
60. Rank Pooling Approach for Wearable Sensor-Based ADLs Recognition. Available online: https://www.info.kindai.ac.jp/~shirahama/rank_pooling (accessed on 1 January 2021).
61. Cook, D.J.; Krishnan, N.C. Activity Learning: Discovering, Recognizing, and Predicting Human Behavior From Sensor Data; John Wiley & Sons: Hoboken, NJ, USA, 2015. [Google Scholar]
62. Esmael, B.; Arnaout, A.; Fruhwirth, R.K.; Thonhauser, G. A statistical feature-based approach for operations recognition in drilling time series. Int. J. Comput. Inf. Syst. Ind. Manag. Appl. 2015
, 5, 454–461. [Google Scholar]
63. Box, G.E.; Jenkins, G.M. Time Series Analysis: Forecasting and Control San Francisco; Wiley: Hoboken, NJ, USA, 1976. [Google Scholar]
64. Order Statistics: Simple Definition, Examples—Statistics How to. Available online: https://www.statisticshowto.com/order-statistics/ (accessed on 15 January 2021).
65. How Statistical Norms Improve Modeling|by Madeline Schiappa|Towards Data Science. Available online: https://towardsdatascience.com/norms-penalties-and-multitask-learning-2f1db5f97c1f (accessed on
15 January 2021).
66. Zhai, H.; Zhang, H.; Xu, X.; Zhang, L.; Li, P. Kernel sparse subspace clustering with a spatial max pooling operation for hyperspectral remote sensing data interpretation. Remote Sens. 2017, 9,
335. [Google Scholar] [CrossRef] [Green Version]
67. Fernando, B.; Habrard, A.; Sebban, M.; Tuytelaars, T. Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the IEEE International Conference on Computer Vision,
Sydney, Australia, 1–8 December 2013; pp. 2960–2967. [Google Scholar]
68. Klema, V.; Laub, A. The singular value decomposition: Its computation and some applications. IEEE Trans. Autom. Control 1980, 25, 164–176. [Google Scholar] [CrossRef] [Green Version]
69. Van Loan, C.F.; Golub, G.H. Matrix Computations; Johns Hopkins University Press: Baltimore, MD, USA, 1983. [Google Scholar]
70. Hotelling, H. Analysis of a complex of statistical variables into principal components. J. Educ. Psychol. 1933, 24, 417. [Google Scholar] [CrossRef]
71. Yam, Y.; Baranyi, P.; Yang, C.T. Reduction of fuzzy rule base via singular value decomposition. IEEE Trans. Fuzzy Syst. 1999, 7, 120–132. [Google Scholar]
72. Bolón-Canedo, V.; Sánchez-Maroño, N.; Alonso-Betanzos, A. A review of feature selection methods on synthetic data. Knowl. Inf. Syst. 2013, 34, 483–519. [Google Scholar] [CrossRef]
73. Fan, R.E.; Chang, K.W.; Hsieh, C.J.; Wang, X.R.; Lin, C.J. LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res. 2008, 9, 1871–1874. [Google Scholar]
74. Khan, M.H.; Helsper, J.; Boukhers, Z.; Grzegorzek, M. Automatic recognition of movement patterns in the vojta-therapy using RGB-D data. In Proceedings of the 2016 IEEE International Conference on
Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 1235–1239. [Google Scholar]
75. Breiman, L. Bagging predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef] [Green Version]
76. Rabiner, L.R. A tutorial on hidden Markov models and selected applications in speech recognition. Proc. IEEE 1989, 77, 257–286. [Google Scholar] [CrossRef]
77. Kellokumpu, V.; Pietikäinen, M.; Heikkilä, J. Human Activity Recognition Using Sequences of Postures. In Proceedings of the IAPR Conference on Machine Vision Application 2005, Tsukuba Science
City, Japan, 16–18 May 2005; pp. 570–573. [Google Scholar]
78. Kolekar, M.H.; Dash, D.P. Hidden markov model based human activity recognition using shape and optical flow based features. In Proceedings of the 2016 IEEE Region 10 Conference (TENCON),
Singapore, 22–25 November 2016; pp. 393–397. [Google Scholar]
79. Stikic, M.; Huynh, T.; Van Laerhoven, K.; Schiele, B. ADL recognition based on the combination of RFID and accelerometer sensing. In Proceedings of the 2008 Second International Conference on
Pervasive Computing Technologies for Healthcare, Tampere, Finland, 30 January–1 February 2008; pp. 258–263. [Google Scholar]
80. Melnikoff, S.J.; Quigley, S.F.; Russell, M.J. Implementing a hidden Markov model speech recognition system in programmable logic. In Proceedings of the International Conference on Field
Programmable Logic and Applications, Belfast, UK, 27–29 August 2001; pp. 81–90. [Google Scholar]
81. Rodriguez, J.J.; Kuncheva, L.I.; Alonso, C.J. Rotation forest: A new classifier ensemble method. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1619–1630. [Google Scholar] [CrossRef]
82. Kittler, J.; Hatef, M.; Duin, R.P.; Matas, J. On combining classifiers. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 226–239. [Google Scholar] [CrossRef] [Green Version]
83. Onan, A.; Korukoğlu, S.; Bulut, H. A multiobjective weighted voting ensemble classifier based on differential evolution algorithm for text sentiment classification. Expert Syst. Appl. 2016, 62,
1–16. [Google Scholar] [CrossRef]
84. LG G5 Smart Phones. Available online: https://www.lg.com/us/g5-phones/ (accessed on 29 March 2021).
85. HUAWEI: SmartWatches. Available online: https://consumer.huawei.com/en/wearables/ (accessed on 29 March 2021).
86. JINS MEME: Eyewear that Sees Your EVERYDAY. Available online: https://jins-meme.com/en/ (accessed on 29 March 2021).
Figure 1. Hierarchical model to recognize human activities. In the first step, the atomic activities are detected from the original sensory data, and their recognition scores are obtained, which are
used to recognize the composite activities in the second step.
Figure 3.
The depiction of codebook-based feature extraction process: (
) Codebook construction by grouping the similar subsequences using k-means clustering algorithm. The center of each group is set as “codeword”. (
) Features are computed on each of the subsequences by assigning them to the most similar codeword [
Figure 5.
Activity recording setup using three wearable devices. The movement data of smart glasses and watch is initially sent to smartphone via Bluetooth connection and later all the recorded data sent to a
home-gateway using Wi-Fi connection [
Figure 6. The composite activity recognition using handcrafted features with Support Vector Machine (SVM) and Random Forest (RF). (a) K-fold cross-validation, (b) hold-out cross-validation, and (c)
leave-one-out cross-validation. In k-fold cross-validation, the training and testing data is split based on k folds. We set the value of k = 3. In hold-out cross-validation, the data of 3 subjects
are used for training, and the data of the remaining 3 subjects is used for testing purposes, iteratively. In leave-one-out cross-validation, the data of 5 subjects are used for training, and the
data of the remaining 1 subject is used for testing purposes, iteratively.
Figure 7. The composite activity recognition using features derived from the subspace pooling technique. The features are classified using Support Vector Machine (SVM) and Random Forest (RF). (a) K
-fold cross-validation; (b) hold-out cross-validation.
Figure 8. The composite activity recognition using features derived from the subspace pooling technique. The features are classified using Support Vector Machine (SVM) and Random Forest (RF). (a)
Leave-one-out cross-validation: subspace pooling using singular value decomposition (SVD), and (b) leave-one-out cross-validation: subspace pooling using principal component analysis (PCA).
Maximum Skewness
Minimum Kurtosis
Average Auto-correlation
Standard-deviation First-Order Mean (FOM)
Zero crossing Norm of FOM
Percentile 20 Second-order mean (SOM)
Percentile 50 Norm of SOM
Percentile 80 Spectral energy
Interquartile Spectral entropy
Table 2.
Details of composite activities in the CogAge dataset [
]. The left hand represents the number of instances in which the activities are mainly performed using the left hand only, whereas, in the both hands setting, the activities are performed using both
Total subjects: 6
Total activities: 7
Activities: Brushing teeth
Cleaning room
Handling medications
Preparing food
Styling hair
Using telephone
Washing hands
Feature representation: 61-dimensional
Total number of instances: 752
Left hand instances: 471
Right hand instances: 281
Table 3. The composite activity recognition using a set of optimal features derived from the subspace pooling technique. The feature are classified with k-fold (where $k = 3$) cross-validation.
The average recognition accuracy with k-fold cross-validation is also presented.
SVD PCA
All Hands Left Hand All Hands Left Hand
Accuracy (SVM) 47.04% 53.00% 47.04% 53.00%
Accuracy (RF) 46.00% 53.00% 46.00% 52.00%
Table 4. The composite activity recognition using a set of optimal features derived from the subspace pooling technique. The features are classified with hold-out cross-validation. The average
recognition accuracy with hold-out cross-validation is also presented.
SVD PCA
All Hands Left Hand All Hands Left Hand
Accuracy (SVM) 51.40% 54.01% 51.40% 54.67%
Accuracy (RF) 53.12% 55.99% 53.37% 55.99%
Table 5. The composite activity recognition using a set of optimal features derived from the subspace pooling technique. The features are classified with leave-one-out cross-validation. The average
recognition accuracy with leave-one-out cross-validation is also presented.
SVD PCA
All Hands Left Hand All Hands Left Hand
Accuracy (SVM) 64.93% 60.61% 58.31% 60.33%
Accuracy (RF) 61.53% 62.50% 62.05% 63.00%
Table 6. The comparative analysis of all the feature extraction techniques along with the classification algorithms. In all the experiments of leave-one-out cross-validation, the average recognition
accuracies are reported here.
Handcrafted feature extraction
Leave-one-out CV + SVM 48.9%
Leave-one-out CV + RF 79%
Subspace pooling
PCA + Leave-one-out CV + RF 62.8%
Optimal feature selection
SVD+ Leave-one-out CV + SVM 62.8%
Leave-one-out CV 64%
Method Baseline Proposed
Hidden Markov Model + Holdout CV 51.20% [15] 51.2%
Hidden Markov Model + Leave-one-out CV 61.01% [15] 64%
Table 8. The comparison of the proposed handcrafted features with different state-of-the-art techniques.
Proposed Handcrafted Features
SVM RF
k-Fold Hold-Out Leave-One-Out k-Fold Hold-Out Leave-One-Out
42% 35% 48.9% 66% 72% 79%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Amjad, F.; Khan, M.H.; Nisar, M.A.; Farid, M.S.; Grzegorzek, M. A Comparative Study of Feature Selection Approaches for Human Activity Recognition Using Multimodal Sensory Data. Sensors 2021, 21,
2368. https://doi.org/10.3390/s21072368
AMA Style
Amjad F, Khan MH, Nisar MA, Farid MS, Grzegorzek M. A Comparative Study of Feature Selection Approaches for Human Activity Recognition Using Multimodal Sensory Data. Sensors. 2021; 21(7):2368. https:
Chicago/Turabian Style
Amjad, Fatima, Muhammad Hassan Khan, Muhammad Adeel Nisar, Muhammad Shahid Farid, and Marcin Grzegorzek. 2021. "A Comparative Study of Feature Selection Approaches for Human Activity Recognition
Using Multimodal Sensory Data" Sensors 21, no. 7: 2368. https://doi.org/10.3390/s21072368
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/1424-8220/21/7/2368","timestamp":"2024-11-03T06:59:48Z","content_type":"text/html","content_length":"530705","record_id":"<urn:uuid:1d09a95e-7c95-4bc5-826d-ef939919b73a>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00587.warc.gz"} |
Spherical math
Spherical math
Low-level utilities for spherical geometry.
Source · Returns the spherical area of the specified GeoJSON object in steradians. This is the spherical equivalent of path.area.
Source · Returns the spherical bounding box for the specified GeoJSON object. The bounding box is represented by a two-dimensional array: [[left, bottom], [right, top]], where left is the minimum
longitude, bottom is the minimum latitude, right is maximum longitude, and top is the maximum latitude. All coordinates are given in degrees. (Note that in projected planar coordinates, the minimum
latitude is typically the maximum y-value, and the maximum latitude is typically the minimum y-value.) This is the spherical equivalent of path.bounds.
Source · Returns the spherical centroid of the specified GeoJSON object. This is the spherical equivalent of path.centroid.
geoDistance(a, b)
Source · Returns the great-arc distance in radians between the two points a and b. Each point must be specified as a two-element array [longitude, latitude] in degrees. This is the spherical
equivalent of path.measure given a LineString of two points.
Source · Returns the great-arc length of the specified GeoJSON object in radians. For polygons, returns the perimeter of the exterior ring plus that of any interior rings. This is the spherical
equivalent of path.measure.
geoInterpolate(a, b)
Source · Returns an interpolator function given two points a and b. Each point must be specified as a two-element array [longitude, latitude] in degrees. The returned interpolator function takes a
single argument t, where t is a number ranging from 0 to 1; a value of 0 returns the point a, while a value of 1 returns the point b. Intermediate values interpolate from a to b along the great arc
that passes through both a and b. If a and b are antipodes, an arbitrary great arc is chosen.
geoContains(object, point)
Source · Returns true if and only if the specified GeoJSON object contains the specified point, or false if the object does not contain the point. The point must be specified as a two-element array [
longitude, latitude] in degrees. For Point and MultiPoint geometries, an exact test is used; for a Sphere, true is always returned; for other geometries, an epsilon threshold is applied.
Source · Returns a rotation function for the given angles, which must be a two- or three-element array of numbers [lambda, phi, gamma] specifying the rotation angles in degrees about each spherical
axis. (These correspond to yaw, pitch and roll.) If the rotation angle gamma is omitted, it defaults to 0. See also projection.rotate.
Source · Returns a new array [longitude, latitude] in degrees representing the rotated point of the given point. The point must be specified as a two-element array [longitude, latitude] in degrees.
Source · Returns a new array [longitude, latitude] in degrees representing the point of the given rotated point; the inverse of rotation. The point must be specified as a two-element array [longitude
, latitude] in degrees. | {"url":"https://d3js.org/d3-geo/math","timestamp":"2024-11-02T09:08:32Z","content_type":"text/html","content_length":"85525","record_id":"<urn:uuid:e3384d1a-ec69-460d-958d-73ea63ef3105>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00615.warc.gz"} |
Teacher access
Request a demo account. We will help you get started with our digital learning environment.
Student access
Is your university not a partner? Get access to our courses via
Pass Your Math
independent of your university. See pricing and more.
Or visit
if jou are taking an OMPT exam. | {"url":"https://app.passyourmath.com/courses/theory/291/2133/32669/en","timestamp":"2024-11-09T06:50:57Z","content_type":"text/html","content_length":"80573","record_id":"<urn:uuid:183aadcc-1e27-467d-8cb2-af9219fa9648>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00689.warc.gz"} |
Summary - Algebra | Chapter 3 | 8th Maths
• When the product of two algebraic expressions we follow,
Multiply the signs of the terms,
Multiply the corresponding co-efficients of the terms.
Multiply the variable factors by using laws of exponents.
• For dividing a polynomial by a monomial, divide each term of the polynomial by a monomial.
• Identity: An identity is an equation is satisfied by any value that replaces its variables (s).
(a + b)^2 = a ^2 + 2ab + b^2
( x + a)( x + b) = x ^2 + (a + b)x + ab
(a - b)^2 a ^2 - 2ab b^2
(a + b)^3 = a ^3 + 3a^2b + 3ab ^2 + b^3
a ^2 − b^2 = ( a + b)( a − b)
(a − b)^3 = a ^3 − 3a^2b + 3ab ^2 − b^3
( x + b )( x + b)( x + c) = x ^3 + ( a + b + c)x ^2 + ( ab + bc + ca )x + abc
• Factorisation: Expressing an algebraic expression as the product of two or more expression is called Factorisation.
• An equation containing only one variable with its highest power as one is called a linear equation in one variable.
• The value which replaces a variable in an equation so as to make the two sides of the equation equal is called a solution or root of the equation.
• Graphing is just a visual method for showing relationships between numbers.
• The horizontal line is named as XOX’, called the X-axis. The vertical line is named as YOY’, called the Y-axis. Both the axes are called coordinate axes. The plane containing the x axis and the y
axis is known as the coordinate plane or the Cartesian plane.
• A point is denoted by a pair (a,b) of two numbers ‘a’ and ‘b’ listed in a specific order in which ‘a’ represents the distance along the X-axis and ‘b’ represents the distance along the Y axis. It
is called an ordered pair (a,b).
• The coordinate axes divide the plane of the graph into four regions called quadrants.
• The line graph for the linear equation is called a linear graph.
Through this activity you will know about the Algebra, operations on them and study their properties as well.
Step-1 Open the Browser and type the URL given below.
Step-2 Click on any one of the link in the items to know about the basics in algebra, exponents, polynomial, quadric equation etc.
Step-3 For example, click on the “Balance While adding and subtracting”, link under the Basic menu. A new tab will open in the browser where you can see the interactive game on adding and subtracting
Step-4 Likewise you can learn all the concepts in algebra.
Web URL Algebra:
*Pictures are indicatives only.
*If browser requires, allow Flash Player or Java Script to load the page.
Expected Outcome
Step-1 Open the Browser type the URL Link given below (or) Scan the QR Code. GeoGebra work book named “ALGEBRA” will open. Click on the worksheet named “Point Plotting”.
Step-2 In the given worksheet you can get new point by clicking on “New point”.Enter the correct point in the input box and press enter.
Browse in the link
https://www.geogebra.org/m/fqxbd7rz#chapter/409574 or Scan the QR Code. | {"url":"https://www.brainkart.com/article/Summary_44351/","timestamp":"2024-11-10T00:03:41Z","content_type":"text/html","content_length":"61880","record_id":"<urn:uuid:1b38a6c5-62a2-47f1-90b9-2a8b22171629>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00531.warc.gz"} |
Fix potential stack overflow related to `Pretty N`. (!286) · Merge requests · Iris / stdpp · GitLab
Fix potential stack overflow related to `Pretty N`.
As reported by @simongregersen at https://coq.zulipchat.com/#narrow/stream/237977-Coq-users/topic/Stack.20overflow.20in.20Qed.2E, lemmas involving Pretty N could lead to stack overflow. I minimized
his problem as follows:
Lemma test_no_stack_overflow p n :
get n (pretty (N.pos p)) ≠ Some "_"%char →
get (S n) ("-" +:+ pretty (N.pos p)) ≠ Some "_"%char.
Proof. intros Hlem. apply Hlem. (* stack overflow *) Qed.
The problem is that Coq's conversion unfolds too much, and triggers the wf_guard 32 in:
Definition pretty_N_go (x : N) : string → string :=
pretty_N_go_help x (wf_guard 32 N.lt_wf_0 x).
The wf_guard is needed to make sure that computation of pretty n for concrete numbers n works (see tests in tests/pretty.v). However, due to concrete number 32, which adds 2 ^ n Acc_intro
constructors to the opaque accessibility proof N.lt_wf_0 for the well-founded recursion, Coq's conversion might unfold wf_guard 32 too eagerly.
I hence changed the 32 into S (N.size_nat x), which causes the tests in tests/pretty.v to still work, and the stack overflow to disappear. The key idea is that S (N.size_nat x) is not a concrete
number if x is an open term, thus preventing wf_guard from unfolding.
Merge request reports | {"url":"https://gitlab.mpi-sws.org/iris/stdpp/-/merge_requests/286","timestamp":"2024-11-10T06:04:15Z","content_type":"text/html","content_length":"68105","record_id":"<urn:uuid:77c12674-d680-48b6-beb9-de1226bf88df>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00466.warc.gz"} |
GPU Kernels for Block-Sparse Weights
Scott Gray, Alec Radford and Diederik P. Kingma
We’re releasing highly optimized GPU kernels for an underexplored class of neural network architectures: networks with block-sparse weights. The kernels allow for efficient evaluation and
differentiation of linear layers, including convolutional layers, with flexibly configurable block-sparsity patterns in the weight matrix. We find that depending on the sparsity, these kernels can run
orders of magnitude faster than the best available alternatives such as cuBLAS. Using the kernels we improve upon the state-of-the-art in text sentiment analysis and generative modeling of text and
images. By releasing our kernels in the open we aim to spur further advancement in model and algorithm design.
1 Introduction
Research in the field of contemporary deep learning [LeCun et al., 2015] is largely constrained by the availability of efficient GPU kernels for defining, evaluating and differentiating various model
architectures. Only a small number of types of linear operations currently have efficient GPU implementations; threelinearoperationsthatcurrentlyenjoyefficientGPUimplementationsaredense dot products,
convolutional operations and (most recently) depth-wise operations. Such operations have two variables as inputs: one input is usually a layer of network activations (corresponding to the current
minibatch of datapoints), and another input that is usually the set of learned weights for that layer. For dense linear operations, these weights are a dense matrix or a dense higher-dimensional
tensor. Two dimensions of these weights correspond to the so-called feature dimensions, whose lengths equal the so-called widths of the input and output layers of the operations. Such dense linear
operations do not scale well in the feature dimensions, since the number of weights is proportional to both the number of input features, and the number of output features. Linearly scaling up the
number of input and output features, results in a quadratic increase in the total number of weights, and a quadratic increase in the computational cost.
Ideally, we would have efficient operations that allow for sparsity in the two feature dimensions. With sparsity, we mean that the value of a subset of weights are specified to be exactly zero. If a
weight is zero, then the linear operation associated with that weight can be skipped, since any value times zero equals zero. Therefore, the computational cost of sparse linear operations is only
proportional to the number of non-zero weights. A problem of operations with weights with arbitrary sparsity, is that they cannot be efficiently implemented on contemporary GPUs. GPUs consist of
thousands of computational cores that all operate in parallel. This restricts us to the set of operations that allow for a high degree of parallelizability, which does not include operations with
arbitrary sparsity patterns.
However, we found that highly optimized block-sparse operations, with block sizes as small as 8 8, can still run efficiently on contemporary GPUs. See figure 1 which explains block-sparse connectivity.
WeintroducehighlyoptimizedGPUimplementationsofvariousblock-sparseoperations. The operations come in roughly two flavors: (1) block-sparse matrix multiplications, and (2) block-sparse convolutional
operations. The kernels and their main documentation can be found on GitHub 1. Please refer to this GitHub page for more information on the API.
Figure 1: Visualization of random dense and random block-sparse weight matrices, where white indicates a weight of zero. Our new kernels allow efficient usage of block-sparse weights in fully
connected and convolutional layers, as illustrated in the middle figure. For convolutional layers, the kernels allow for sparsity in input and output feature dimensions; the connectivity is still
dense in the spatial dimensions. The sparsity is defined at the level of blocks (right figure), with block size of at least 8 × 8. At the block level, the sparsity pattern is completely configurable.
Since the kernels skip computations of blocks that are zero, the computational cost is only proportional to the number of weights, not the number of input/output features.
Figure 2: Dense linear layers (left) can be replaced with layers that are sparse and wider (center) or sparse and deeper (right) while approximately retaining computational cost and memory cost. Note
these costs are, in principle, proportional to the number of non-zero weights (edges). The shown networks have an equal number of edges. However, the sparse and wide network has the potential
advantage of a larger information bandwidth, while the deeper network has the potential benefit of fitting nonlinear functions.
Block-sparsity unlocks various research directions (see section 6). One application we explore in experiments is the widening or deepening of neural networks, while increasing sparsity, such that the
computational cost remains approximately equal as explained in figure 2. In experiments we have only scratched the surface of the applications of block-sparse linear operations; by releasing our
kernels in the open, we aim to spur further advancement in model and algorithm design.
2 Capabilities
The two main components of this release are a block-sparse matrix multiplication kernel and a block-sparse convolution kernel. Both are wrapped in Tensorflow [Abadi et al., 2016] ops for easy use and
the kernels are straightforward to integrate into other frameworks, such as PyTorch.
Both kernels support an arbitrary block size and are optimized for 8×8, 16×16, and 32×32 block sizes. The matrix multiplication kernel supports an arbitrary block layout which is specified via a
masking matrix. In addition, the feature axis is configurable. The convolution kernel supports non-contiguous input/output feature blocks of any uniform or non-uniform size specified via a configuration
format (see API) though multiples of 32×32 perform best. Arbitrary dense spatial filter sizes are supported in addition to dilation, striding, padding, and edge biasing.
A variety of efficient helper ops are included for common routines such as layer and batch normalization of activations, L2 normalization of weights, dropout, activation functions, and elementwise
Since sparse networks allow for much larger activation tensors than dense networks, operations tend to be bandwidth bound instead of compute bound on GPU hardware. Reduced precision formats lower
bandwidth significantly which helps alleviate this problem. To this end, the kernels support fp16 in addition to fp32 with additional compact formats such as bfloat16 in active development.
3 Benchmarks
3.1 Performance (GFLOPS) compared to cuBLAS and cuSPARSE kernels
Figure 3: Empirical speed-ups, in terms of relative GFLOPS, of block-sparse matrix multiplication with a 12288 12288 weight matrix, a minibatch size of 32, and a block size of 32. We compare against
cuBLAS (CUDA 8) matrix multiplication. Other baselines typically faired worse than cuBLAS.
In order to verify the efficiency of our proposed kernels, we compare against three baseline techniques for linear layers with block-sparse weights. For all cases, we tested on a NVIDIA Pascal Titan
X GPU, with minibatch size 32 and block size 3232.
The first baseline technique is the naïve use of cuBLAS kernels with sparse weight matrices. Since this technique does not ’skip’ blocks of weights whose values are 0, the computational complexity is
proportionaltothetotalnumberofentriesinthematrix,notthenumberofnon-zeroblocks. Therefore, this technique performs a lot of unnecessary operations. See figure 3 for the relative speed of our kernel,
compared to this technique. For higher degrees of sparsity, we see as expected a speedup factor close to 1/1 s/100, where s is the sparsity percentage.
We also compared against baselines of (1) block-sparse matrix multiplication through performing a sequence of small per-block matrix multiplications with cuBLAS, and (2) block-sparse matrix
mult-plication using the cuSPARSE library. Unlike the previous baseline, the computational complexities of these methods are only proportional to the number of non-zero blocks. Still, in our
experiments, these baselines faired worse than the previously baseline of naïve usage of cuBLAS; the number of GFLOPS did not not exceed about 50, regardless of the degree of sparsity. Our kernels
typically performed one or two orders of magnitude faster in terms of GFLOPS.
3.2 Effect of block size, features axis and hidden state size
We benchmarked the performance of our kernels, in terms of GFLOPS, as a function block size, features axis and hidden state size; see figure 4. In each experiment, we kept the total number of
parameters fixed. This experiment was performed on a NVIDIA P100 GPU, with a small-world LSTM with about 3 million parameters, with a hidden state size ranging from 1792 to 10752, corresponding to 0%
to 97% sparsity. The evaluation was done with a minibatch size of 64; this
Figure 4: (a-c): Performance of elementary operations in our proposed kernels in terms of GFLOPS as a function of block size, feature axis and the hidden state size. See section 3.
size often performs best due to reduced cache dilution compared to larger minibatch sizes. The connectivity pattern is generated with the Watts-Strogatz algorithm, with 20% random long range
connections, but performance with Barabási-Albert connectivity is close.
The operation with feature_axis=1 corresponds to an assembly-optimized kernel, and is essentially the same kernel as the openai-gemm kernel, now also used in cuBLAS for tile size 32×32. This kernel
clearly outperforms the kernel for feature axis 0, but does not work for Kepler and Volta GPUs.
Figure 4a shows performance of the fprop/bprop operation, which compute forward activations, and compute gradients w.r.t. the forward activations respectively. Since bprop is the transpose of the
fprop operation, and transposes can be done in-place, the two operations have identical performance. Note that, perhaps somewhat surprisingly, in this experiment a higher degree of sparsity generally
leads to better performance in GFLOPS. This is due to the higher parallelizability of the accumulation operation in case of larger hidden state sizes.
In figures 4b and 4c we benchmarked the update operation, which computes derivatives w.r.t. the block-sparse weights. Note that if a weight matrix is re-used multiple times in a computational graph,
such as in an LSTM, this operation can be grouped, i.e. performed in parallel across multiple timesteps. Comparing figure 4b with 4c, it is clear that the grouping leads to substantial improvement in
GFLOPS. Grouped update operations are easy to perform, since the kernels take lists of (activation, gradients) as input, avoiding the requirement of pre-concatenation.
4 Experiments
Our new kernels open up a large space of new possibilities for learning algorithms and neural network
architectures. In this section ,we experiment with some relatively simple block-sparse patterns with
fixed sparse topology. Note that the kernels in principle allow for much more exotic methods, such as
a learned sparsity structure, and more exotic models; we leave this for future work.
4.1 LSTMs with Deep Updates
Figure 5: In experiments, we use an LSTM architecture with deep updates (right), where all linear operations have block-sparse connectivity with small-world topology. The architecture can be seen as
an evolution of the multiplicative LSTM (left) [Krause et al., 2016].
The block-sparse kernels allow us to efficiently implement LSTMs with block-sparsity connectivity. With sparse connectivity, we mean that the linear operations in a LSTM are replaced by block-sparse
linearoperations. Sparseconnectivityallowsusto,forexample,widenthenetworkwithoutincreasing the number of parameters.
One appealing property of LSTMs with densely connected recurrent connections is that the value of each activation (neuron) at timestep t is a function of all hidden activations, and all inputs, at
timestep (t 1). In other words, information fully mixes between timesteps. This is not true for a LSTM with naïve block-sparse connectivity: since not all neurons are directly connected, information
would not fully mix between timesteps. A similar observation was made in [Wen et al., 2017] and motivated their structured sparsity approach.
We choose to tackle this problem with the introduction of a sparse multi-layer network within each LSTM cell; see figure 5 for an illustration, and algorithm 1 in the appendix for pseudo-code of this
LSTM with deep updates. Given the right block-sparse connectivity, such networks can fully mix within a relatively small number of layers when combined with a small-world connectivity, as we will
explain in section4.2. The LSTM architecture builds on the multiplicative LSTM (mSLTM) architecture proposed in [Krause et al., 2016], and utilizes layer normalization [Ba et al., 2016] for improved
Adding internal depth is a good way to increase parameter efficiency, even in the dense case; see figure 8 in the appendix. Performance seems to saturate after about 4 internal steps.
4.2 Small-World Networks
In the choice of sparse connectivity, we take inspiration from the field of small-world networks, as we will now explain. A network is defined to be a small-world network [Watts, 1999] if the following
two properties hold.
Figure 6: Visualization of adjacency matrices of random, Watts-Strogatz (WS) with a 1-Dimensional ring lattice, and Barabási-Albert (BA) networks. The latter two correspond to small-world networks,
defined as having both a large clustering coefficient, and short average path length (see section 4.2). The purely random network, shown left, is not a small-world network, as it lacks clustering.
1. The clustering coefficient is not small. The clustering coefficient is a measure of locality of connectivity, and is high when the graph contains many cliques or near-cliques: subnet-works that are
almost fully interconnected. Note that RNNs constructed with block-sparse connectivity automatically have this property.
2. The average path length L between nodes (neurons) scales only logarithmically with the total number of nodes/neurons N, i.e. L / logN. The path length between nodes equals the length of the
shortest path between these nodes. A short average path length leads to rapid mixing of information.
A well-known example of small-world networks is the human social network [Watts, 2004]: our friends, family and acquaintances often also know eachother (high clustering), but we are also on average
only about ’six handshakes away’ from a random other person on earth. Another example is the human brain, whose anatomical and functional networks often also exhibit small-worldness [Bassett and
Bullmore, 2006].
Some algorithms for generating graphs with small-world properties are:
1. Random block-sparse connectivity. Note that a plain block-sparse structure with non-trivially sized blocks, automatically has some degree of clustering. This results in the simplest type of
small-world RNN.
2. The Watts-Strogatz (WS) model [Watts and Strogatz, 1998]. The algorithm that construct WSgraphsstartsoutwithaK-dimensionalringlatticewithadensepurelylocalconnectivity; every node is connected to
every other node within a certain distance within the lattice. Then a random subset (k%) of all connections is replaced with a random connection. The other (100 k)% local connections are retained.
3. The Barabási-Albert (BA) model [Barabási and Albert, 1999]. The algorithm that constructs such graphs begins with an initial densely connected network with m0 nodes. Then, new nodes are added on
at a time, each new node connected to m m0 existing nodes, where the probability that a new node connects to a particular existing node i is proportional to the degree (the number of connections) of
that existing node. This leads to a power law degree distribution, with a very small number of nodes with a very large degree of connectivity (’the rich get richer’).
Our block-sparse GPU kernels allow us to implement models with WS and BA connectivity on the block level. See figure 6 for an illustration of the types of connectivity.
Neural networks with small-world connectivity allow us to train wider networks without incurring a quadratic increase in the number of parameters. This let us scale RNNs to very large states. Since
the average path length between neurons scales only as the logarithm of the total number of neurons, even very large networks are fully connected within a short number of timesteps. As a result,
information has the ability to mix rapidly within the network.
4.3 Small-World LSTMs
We trained LSTMs with deep updates and small-world block-sparse connectivity, which we refer to as Small-World LSTMs. For a large scale experiment we follow the setup of [Radford et al., 2017] and
train byte-level generative models on the Amazon Reviews corpus [McAuley et al., 2015]. Due to more efficient implementations, models are trained for four epochs instead of only one. Batch size is
also increased by 4x resulting in an equivalent amount of total updates. As in other experiments, we train models with nearly equivalent parameter counts and the same hyper-parameters, comparing
dense weight matrices with a block-sparse variant. A dense model with a state size of 3168 is trained. The sparse model uses a Barabási-Albert connectivity pattern with an effective sparsity of ~97%
and a state size of 18432. The dense model reaches 1.058 bits per byte – already a significant improvement over the 1.12 bits per byte previously reported on this dataset. The sparse model improves
this further, reaching 1.048 bits per byte.
4.4 Binary Sentiment Representation Learning
Inadditiontothegenerativeresultsabove,wealsocomparetheusefulnessofthesemodelsforthetask of binary sentiment classification in figure 7. We follow the semi-supervised methodology established by [Kiros
et al., 2015] which trains a task-specific supervised linear model on top of a more general purpose unsupervised representation. Due in part to the much higher feature dimensionality of the sparse
model increasing the effective capacity of the linear model, it outperforms the dense model on all sentiment datasets. Of particular note, our sparse model improves the state of the art on the
document level IMDB dataset from 5:91% error [Miyato et al., 2016] to 5:01%. This is a promising improvement compared to [Radford et al., 2017] which performed best only on shorter sentence level
Figure 7: Binary sentiment classification error (%) of linear models trained on the features of dense and sparse generative models with approximately equivalent total parameter counts. SOTA for the
Customer Reviews and Stanford Sentiment Treebank datasets from [Radford et al., 2017], for IMDB [Miyato et al., 2016], for Yelp [Johnson and Zhang, 2017].
4.5 Block-Sparse Convolutional Networks
Finally, we tested whether replacement of dense (regular) convolutional kernels with block-sparse kernels improves results in a generative modeling benchmark. In order to maximize fairness of
comparison, we took a pre-existing implementation of a SOTA model, and kept all hyper-parameters (including those for optimization) unchanged, while sparsifying and deepening the model such that the
total number of parameters is approximately unchanged. Specifically, we took the openly available implementation of the PixelCNN++[Salimansetal.,2017] generative model, and replaced the regular
convolutions with block-sparse convolution with a block-diagonal structure. This is also known as grouped convolution [Zhang et al., 2017, Xie et al., 2016]. Similar to [Zhang et al., 2017], we added
a shuffle operator after every block-sparse convolution. In our case, shuffling has no additional computational or memory cost, since it can be merged with the convolution operation, simply by doing
shuffling of the sparsity structure.
We found that increasing the depth of each stage of the model by a factor 2 or 4, while increasing the sparsity so as to keep the total number of parameters approximately constant, leads to
increasingly better performance in terms of the bits per dimension (bpd). With an increase of the depth by a factor 4, this resulted in 2.90 bpd, which is (to the best of our knowledge) the best
reported number in the literature so far.
These results are consistent with findings by [Zhang et al., 2017] and [Xie et al., 2016], who found that similarly grouped convolution led to improvements in classification error in supervised tasks.
For extensive experimental evidence that block-sparse convolutions can significantly improve results, we refer to these previous publications.
5 Related Work
There is extensive evidence in the literature that architectures with block-sparse linear operations can substantially improve results.
In [Xie et al., 2016], for example, a novel convolutional network architecture called ResNeXt was proposed using block-sparse convolutions, improving upon the state-of-the-art. Later, the ShuffleNet
[Zhang et al., 2017] used block-sparse convolutions as well, this time in combination with a shuffle operation, like we did in our convolutional experiments. These architecture can potentially greatly
benefit from efficient block-sparse GPU kernels.
Depthwise separable convolutions [Simonyan and Zisserman, 2014] can be viewed as block-sparse convolutions with a block-diagonal connectivity matrix, and block size 1. Depthwise separable
convolutions have been used to advance the state of the art in image classification in various pub-lications, starting in [Simonyan and Zisserman, 2014], later in the Xception [Chollet, 2016] and
MobileNet [Howard et al., 2017] architectures.
One of our motivations for block-sparsity in RNNs; larger state sizes without significantly increased computational or parameter costs, has been successfully addressed in numerous ways by other
approaches. The projection LSTM of [Sak et al., 2014] allows for more efficient usage of high dimensional states by reducing the number of parameters and computation in the hidden to hidden transition
of an LSTM and has been scaled up to achieve state of the art results in large scale language modeling [Jozefowiczetal.,2016]. Thisworkhasbeenfurtherimprovedby [KuchaievandGinsburg, 2017] who explore
additional factorization methods for LSTMs.
In contrast to our small-world LSTM approach which trains with a fixed block-level sparsity pattern, more flexible methods which learn structured sparsity in RNNs have been explored. Group lasso
regularization [Yuan and Lin, 2006] is a popular technique which by appropriate definition of a group encourages structured sparsity. Closely related to our work, [Narang et al., 2017] used carefully
scheduled thresholding and group lasso on blocks of weight matrices to learn block-sparse RNNs. [Wen et al., 2017] achieved unit level pruning via group lasso regularization of “Intrinsic Sparse
Structure weight groups” – which correspond to the rows and columns of LSTM weight matrices for a specific unit.
Wearefarfromthefirsttoapplyinternalstepsinarecurrentneuralnetwork. In[Zillyetal.,2016]and [Graves, 2016], for example, RNNs were trained with multiple internal steps per external timestep.
6 Research Directions
There remain a large number of unexplored research directions and potential applications of the block-sparse kernels. Here we list some open questions and suggestions for future research.
Often, a large percentage of the weights in neural networks can be pruned after training has finished, as shown by various recent work summarized in [Cheng et al., 2017]. Typically these results
could not be translated into wall-clock speedups, since there was an absence of GPU kernels that could leverage sparsity. How much wall-clock time speed-up is possible at inference time, when using
block-wise pruning of weights, together with block-sparse kernels?
In biological brains, the sparse structure of the network is partially determined during development, in addition to connection strengths. Can we do something similar in artificial neural networks,
where we use gradients to not only learn the connection weights, but also the optimal sparsity structure? A recent paper proposed a method for learning block-sparse recurrent neural networks [Narang
et al., 2017], and we recently proposed an algorithm for L0 regularization in neural networks [Louizos et al., 2017], which can be used towards this end.
We trained LSTMs with tens of thousands of hidden units, leading to better models of text. More generally, sparse layers make it possible to train models with huge weight matrices but the same number
of parameters and the same computational cost as their smaller dense counterparts. What are application domains where this will make the most difference to performance?
7 Conclusion
We released highly optimized GPU kernels for gradient-based learning and inference in neural networks with block-sparse weights. In benchmarking experiments, we found that our GPU kernels indeed work
much more efficiently than alternative kernels that are not optimized for block-sparse weights. We use the kernels to implement small-world LSTMs, which allow us to scale up to much wider states than
typically used in LSTMs. We compared the representations (learned generatively on Amazon reviews data) of a dense network [Radford et al., 2017] with the wider and sparse variant, in terms of their
usefulness for classifying sentiment. We found that the wider state indeed helped identify sentiment, leading to state-of-the-art results on various sentiment classification benchmarks. The
bits-per-character results on the Amazon reviews data set are also the best reported in the literature so far. We also saw improvements in the bits-per-dimension performance in generative modeling of
CIFAR-10, when using sparse layers. Much is left to be explored in the space of block-sparse neural networks, and we have listed some potentially fruitful directions for future research.
Acknowledgments. We would like to thank Nvidia Corporation for their generous gift of a DGX-1 GPU machine, which was crucial for training our large scale block-sparse LSTMs. We would also like to
thank Jack Clark, Jonas Schneider, Greg Brockman, Ilya Sutskever and Erich Elsen for their help leading up to this release.
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on
heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. science, 286 (5439):509–512, 1999.
Danielle Smith Bassett and ED Bullmore. Small-world brain networks. The neuroscientist, 12(6): 512–523, 2006.
Yu Cheng, Duo Wang, Pan Zhou, and Tao Zhang. A survey of model compression and acceleration for deep neural networks. arXiv preprint arXiv:1710.09282, 2017.
François Chollet. Xception: Deep learning with depthwise separable convolutions. arXiv preprint arXiv:1610.02357, 2016.
Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision
applications. arXiv preprint arXiv:1704.04861, 2017.
Rie Johnson and Tong Zhang. Deep pyramid convolutional neural networks for text categorization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1:
Long Papers), volume 1, pages 562–570, 2017.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In Advances in neural information processing systems, pages
3294–3302, 2015.
Ben Krause, Liang Lu, Iain Murray, and Steve Renals. Multiplicative lstm for sequence modelling. arXiv preprint arXiv:1609.07959, 2016.
Oleksii Kuchaiev and Boris Ginsburg. Factorization tricks for lstm networks. arXiv preprint arXiv:1703.10722, 2017.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
Christos Louizos, Max Welling, and Diederik P. Kingma. Learning sparse neural networks through l0 regularization. arXiv preprint arXiv:1712.01312, 2017.
Julian McAuley, Rahul Pandey, and Jure Leskovec. Inferring networks of substitutable and com-plementary products. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining, pages 785–794. ACM, 2015.
TakeruMiyato,AndrewMDai,andIanGoodfellow. Adversarialtrainingmethodsforsemi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016.
Sharan Narang, Eric Undersander, and Gregory Diamos. Block-sparse recurrent neural networks. arXiv preprint arXiv:1711.02782, 2017.
Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444, 2017.
Has¸im Sak, Andrew Senior, and Françoise Beaufays. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Fifteenth Annual Conference of the International
Speech Communication Association, 2014.
Tim Salimans, Andrej Karpathy, Xi Chen, Diederik P Knigma, and Yaroslav Bulatov. Pixelcnn++: A pixelcnn implementation with discretized logistic mixture likelihood and other modifications. In
International Conference on Learning Representations (ICLR), 2017.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Duncan J Watts. Small worlds: the dynamics of networks between order and randomness. Princeton university press, 1999.
Duncan J Watts. Six degrees: The science of a connected age. WW Norton & Company, 2004.
Duncan J Watts and Steven H Strogatz. Collective dynamics of’small-world’networks. nature, 393 (6684):440, 1998.
Wei Wen, Yuxiong He, Samyam Rajbhandari, Wenhan Wang, Fang Liu, Bin Hu, Yiran Chen, and Hai Li. Learning intrinsic sparse structures within long short-term memory. arXiv preprint arXiv:1709.05027,
Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. arXiv preprint arXiv:1611.05431, 2016.
Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. arXiv preprint arXiv:1707.01083, 2017.
Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, and Jürgen Schmidhuber. Recurrent highway networks. arXiv preprint arXiv:1607.03474, 2016.
A Appendix
Figure 8: Training curves of our LSTM with deep updates and dense connectivity, as a function of the internal depth and the network width. The network width was chosen in order to closely match the
number of parameters of the network with an internal depth of 2. | {"url":"https://customaiintegrations.com/gpu-kernels-for-block-sparse-weights/","timestamp":"2024-11-13T06:27:11Z","content_type":"text/html","content_length":"122299","record_id":"<urn:uuid:2f1371f1-c1f2-4d51-9e0d-1278aee8063a>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00781.warc.gz"} |
Full-Length 7th Grade PSSA Math Practice Test-Answers and Explanations
Did you take the 7th Grade PSSA Math Practice Test? If so, then it’s time to review your results to see where you went wrong and what areas you need to improve.
7th Grade PSSA Math Practice Test Answers and Explanations
1- Choice C is correct.
If the score of Mia was 90, then the score of Ava is 30. Since the score of Emma was one and a half as that of Ava, therefore, the score of Emma is 1.5 × 30 = 45.
2- Choice A is correct
Write the ratio and solve for \(x\).
\( \frac{60}{50}=\frac{5x+2}{10}⇒ 12=5x+2 ⇒12-2=5x⇒ x=\frac{10}{5}=2\)
3- Choice B is correct
Let \(x\) be the number of students in the class. \(40\%\) of \(x\) = girls, \(25\%\) of girls = tennis player,
Find \(25\%\) of \(40\%\). Then: \(25\%\) of \(40\%=0.25×0.40=0.1=10\%\) or \(\frac{10}{100}=\frac{1}{10}\)
4- Choice C is correct
Use the information provided in the question to draw the shape.
Use Pythagorean Theorem: \(a^2+b^2=c^2\)
\(30^2+40^2=c^2⇒ 900+1600= c^2⇒2500= c^2⇒c=50\)
5- Choice A is correct
Write a proportion and solve for \(x\).
\( \frac{12 \space Cans}{$ 7.40}=\frac{30 \space Cans}{x }, x= \frac{7.40×30}{12} ⇒x=$18.5\)
The Absolute Best Book to Ace the 7th Grade PSSA Math Test
Original price was: $23.99.Current price is: $18.99.
6- Choice D is correct
Use the volume of square pyramid formula.
\(V= \frac{1}{3} a^2 h ⇒V=\frac{1}{3} (12 \space m)^2×20 \space m ⇒ V=960 \space m^3\)
7- Choice C is correct
Let \(x\) be the number of soft drinks for 240 guests. Write a proportional ratio to find \(x\). \(\frac{6 \space soft \space drinks}{8 \space guests}=\frac{x}{240 \space guests}, x=\frac{240×6}{8}⇒x
8- Choice B is correct
Use the formula for Percent of Change: \(\frac{New \space Value-Old \space Value}{Old \space Value}×100\%, \frac{1.75-1.4}{1.4}×100\%=25\%\)
9- The answer is: \(-99\)
Use PEMDAS (order of operation):
10- Choice D is correct
Simplify. \(5x^2 y(2xy^3)^4=5x^2 y(16x^4 y^{12} )=80x^6 y^{13}\)
Best 7th Grade PSSA Math Prep Resource for 2022
Original price was: $18.99.Current price is: $13.99.
11- Choice C is correct
The distance between Jason and Joe is 14 miles. Jason running at 6 miles per hour and Joe is running at the speed of 8 miles per hour. Therefore, every hour the distance is 2 miles less.
14 ÷ 2 = 7
12- Choice A is correct.
Let x be the integer. Then: \(5x-9=101\), Add 9 both sides: \(5x=110\), Divide both sides by 5: \(x=22\)
13- Choice D is correct
Two and half times of 18,000 is 45,000. One-fifth of them canceled their tickets.
One sixth of \(45,000\) equals \(9,000(\frac{1}{5} × 45000=9000)\).
\(36,000(45000-9000=36000)\) fans are attending this week
14- Choice C is correct
Write the numbers in order: \(25,12,13,18,22,36,22\)
Since we have 7 numbers (7 is odd), then the median is the number in the middle, which is 22.
15- Choice D is correct.
The question is: 615 is what percent of 820?
Use percent formula: \(part=\frac{percent}{100}×whole\)
\(615=\frac{percent}{100}×820 ⇒ 615=\frac{percent ×820}{100}⇒61,500=percent×820\) ⇒
\(percent=\frac{61,500}{820}=75\), \(615\) is \(75\%\) of \(820\). Therefore, the discount is: \(100\%-75\%=25\%\)
16- The answer is \(22 \frac{1}{3}\) miles.
Robert runs \(4 \frac{1}{3}\) miles on Saturday and \(2(4 \frac{1}{3})\) miles on Monday and Wednesday.
Robert wants to run a total of 35 miles this week. Therefore, subtract 4 \(\frac{1}{3}+2(4 \frac{1}{3})\) from 35.
\(35-(4 \frac{1}{3}+2(4 \frac{1}{3} ))=35-12 \frac{2}{3}=22 \frac{1}{3}\) miles
17- Choice B is correct
To find the area of the shaded region, find the difference of the area of two circles. \(S_1\): the area of bigger circle. \(S_2\): the area of the smaller circle). Use the area of circle formula. \
\(S_1- S_2=π(6 \space cm)^2- π(4 \space cm)^2⇒S_1- S_2=36π \space cm^2-16π \space cm^2 ⇒ S_1- S_2 =20π \space cm^2\)
18- Choice A is correct
Use Pythagorean Theorem: \(a^2+b^2=c^2\),
\(12^2+5^2=c^2⇒ 144+25= c^2 ⇒ c^2=169 ⇒c=13\)
19- Choice A is correct
Let L be the price of the laptop and C be the price of the computer. 4(L) =7(C) and L = $240 + C
Therefore, 4($240 + C) =7C ⇒ $960 + 4C = 7C ⇒ C=$320
20- The answer is 70.
Jason needs an \(75\%\) average to pass five exams. Therefore, the sum of 5 exams must be at least \(5×75=375\), The sum of 4 exams is \(62+73+82+88=305\).
The minimum score Jason can earn on his fifth and final test to pass is:
\( 375-305=70\)
The Best Books to Ace the 7th Grade PSSA Math Test
Original price was: $23.99.Current price is: $18.99.
Original price was: $18.99.Current price is: $13.99.
Related to This Article
What people say about "Full-Length 7th Grade PSSA Math Practice Test-Answers and Explanations - Effortless Math: We Help Students Learn to LOVE Mathematics"?
No one replied yet. | {"url":"https://www.effortlessmath.com/math-topics/full-length-pssa-7-math-practice-test-answers-and-explanations/","timestamp":"2024-11-12T16:31:58Z","content_type":"text/html","content_length":"81044","record_id":"<urn:uuid:22bb3e4b-523c-47c9-9483-e1d8c95ccf90>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00788.warc.gz"} |
Radiation measurement
With respect to the actual monitoring purpose the relevant sources of ionizing radiation are substances which are subject to spontaneous decay. This is the case with isotopes with instable atomic
nuclei as opposed to their stable variants. Spontaneously their nuclei split into smaller nuclei (mostly unstable as well). The decay goes along with the emission of particles—Helium nuclei (alpha
radiation) or electrons (beta radiation)—and/or energy quanta (gamma radiation).
Radiation detectors
Ionizing radiation can be detected by utilizing its ionizing effects. Best known, and widely applied, is the Geiger-Müller counter tube: A hermetically closed glass cylinder is filled with a noble
gas (Argon or Xenon). The inner surface of the cylinder is coated with an electrically conductive layer. A thin metallic wire is mounted, isolated from the coating, in the middle axis of the
cylinder. The wire and the conducting coating will be used as electrodes. An atom of the gas filling, that will be hit by a particle or quantum irradiated by radioactive materials in the ambience,
will be ionized, i.e. the gas atom will be split into a positive ion and an electron. Let us assume that we apply a high voltage of e.g. 500V DC between the electrodes. Due to the electrical field
between the electrodes the electrons will move towards the positive electrode while the (positive) ions will move towards the negative electrode where they will recombine with electrons. For every
split and recombined atom one electron has to pass the outside circuit thus generating an elementary current impulse.
Other detection mechanisms utilize ionizing effects in the semiconductor layers of PIN diodes, or they make use of scintillators. In a scintillator crystal high-energy quanta are converted into
avalanches of photons which are captured by a photo detector (e.g. a photo diode).
Technical realisation
The frequency of events which cause electrical impulses can be taken as a measure for the intensity rate of radiation emitted in the abbience of the detector. Hence the task is to count, within an
appropriate time interval, the impulses generated by the detector (which explains the term “counter tube”). The choice of the time interval depends on the purpose of the measuring task since the
generation of impulses is subject to a stochastic process, i.e. a case of likelihood.
Radiation levels are stochastic quantities
The likelihood of the event of an eventual impulse depends on several processes. The likelihood chain starts with the random decay of an atom of the radioactive isotope. The released particle or
quantum resp will pass the detector space only accidentally since it could be ‘shot’ into any spacial direction. Finally, not every particle or quantum passing the detector space will cause an
interaction which results in an electrical impulse. All those probabilities are to be multiplied thus leading to a random number of events counted within a given time interval even at a constant
exposure rate.
The distribution of counter results is described by means of a probability-density function. The Poisson distribution underlying this type of random process can be approximated by the well-known
Gaussian distribution. Its maximum represents the mean value of a large number of counter results at constant exposure rate. The segments of the area below the curve represent the number of results
falling into the respective segment.
A Gaussian distribution is characterized by its maximum (aka ‘expectation’) and the standard deviation σ represented, in above figure, by the (equal) width of segments. Due to the underlying Poisson
distribution the standard deviation is given, in the actual case, by the square root of the expectation. Let us give a numerical example:
• Let the mean number of events counted within an interval of 15 minutes be 100 (which approximates the conditions implemented by the TDRM sensor stations).
• The standard deviation σ then would be 10 counts or 10% of the mean value.
• Hence two thirds of results would lie within the interval from 90 to 110, another third would lie outside of the ±10% range.
• A narrowing of the standard deviation range to ±1% would require an extension of the counter interval by a factor of 100 (!), hence 25 hours.
• The standard deviation range would shrink also with increasing intensity of radiation. An (abnormal) increase by a factor of 10 would result in a decrease of the standard deviation range to ±3%.
The numerical example illustrates the inevitable trade-off between sensitivity and response time associated with the measurement of stochastic phenomena.
We decided to set the counter interval to 15 minutes in favor of a fast response. With respect to the purpose of the monitoring network we take it for most important to indicate irregular situations
without delay rather than to resolve it to the finest degree. Every minute a TDRM sensor stations delivers the result of events counted within a ‘sliding window’ of the past 15 minutes. Hence trends
can already be estimated after a couple of minutes.
Typical time diagram of measurements: variations of the radiation level are of stochastic origin rather than fluctuations of the dose rate of the ambient atmosphere | {"url":"https://tdrm.fiff.de/index.php/en/technology-en/radiation-measurement-en","timestamp":"2024-11-02T20:49:49Z","content_type":"text/html","content_length":"101840","record_id":"<urn:uuid:7369034c-b1c4-4c59-849a-492602e2a86f>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00252.warc.gz"} |
Mirzakhani and Meanders
Laureates of mathematics and computer science meet the next generation
As part of the panel event which took place as part of this year’s HLF on the life and influence of Mayam Mirzakhani, we heard from several people who have found her inspirational. Two of the young
researchers shared their experiences in following in her lead, to try to ‘make a footprint in the field’; we also heard from Andrea Vera Gajardo, organiser of the May 12 Initiative which celebrates
women in mathematics every year on Maryam Mirzakhani’s birthday; and Hélène Barcelo, from the Mathematical Science Research Institute, spoke about the chair named in Mirzakhani’s honour. It was an
impressive collection of stories showing her wide-reaching impact as a role model.
But the session also included an insight into Mirzakhani’s impact as a mathematician – Vincent Delecroix, a researcher at LaBRI in Bordeaux, studies dynamical systems and curves on surfaces, and
described a recent problem he and his colleagues worked on which brought them into Maryam Mirzakhani’s mathematical world.
The work concerned studying a particular type of curve called a ‘meander’. Given a horizontal line dividing a space in two, a meander is a single closed curve which goes above and below the line, and
is made up of semicircular arcs of different sizes.
A meander with 12 crossings.
The number of times the meander crosses the line can be any even number from two upwards, and one problem Delecroix has been working on is enumeration of these curves – for a given number of
crossings, how many different possible meanders are there?
With two crossings, there is only one possible meander (a circle), and for four there are two, shown above left. For 6 crossings there are 8 possible meanders (shown above right), and for 8 crossings
there are 42. The number of possible meanders of each order (an order N meander has 2N crossings) has been counted for smaller numbers of crossings (it’s known for up to 84 crossings), but in general
there’s no currently known simple formula for calculating these numbers. The number of meanders of order N is denoted M[N], and researchers are trying to understand the behaviour of this sequence –
how fast does it grow?
These curves can be considered in their literal interpretation – if you’re building a road across land with a river on it, the river might cross the road, and depending on the shape of the river and
the placement of the road, it will cross the road a different number of times. But it’s also a more theoretical question – in mathematics, the surface you’re building a road on might not be
two-dimensional, and might use a different type of geometry. Topologists study the number of possible curves on a shape as a way of understanding its structure, and enumerative geometry considers
these kinds of questions.
We can vary the definition of a meander in a few different ways – for example, we’re considering closed curves here, but an ‘open meander’ is one where the two ends of the curve don’t join up, and
these can have odd numbers of crossings. You can also consider the line you’re crossing to be on the surface of an object like a sphere – in which case the question becomes more like, how many rivers
cross the equator 2N times? In this case, we consider the left-hand crossing to be directly next to the right-hand crossing, as the line wraps around and connects back to itself.
Some of the arches making up the meander are small ones which connect a crossing to the one directly next to it (this includes arches connecting the two ends, if you’re wrapped around a sphere). Such
an arch is called a minimal arch, and this is another aspect of meanders which can be studied – how many meanders of order N (with 2N crossings) have K minimal arches? The answer is denoted M[(N,K)],
and this is a particular subject of study for Delecroix and his collaborators.
L: 12 crossings and 8 minimal arches; R: 12 crossings and 5 minimal arches.
In their paper, ‘Enumeration of Meanders and Masur-Veech Volumes’, they give a formula for counting the number of meanders with less than a given number of crossings, which gives an indication of how
this number will grow. The formula is given here, although Delecroix admits ‘it’s not so nice’:
This is where the connection to Maryam Mirzakhani becomes apparent: she also worked on counting the number of possible curves on surfaces – although in her case she was studying hyperbolic spaces.
These involve a different definition of what a straight line is, and involve ‘cusps’ – points in the space where the geometry is distorted and behaves strangely. Mirzakhani studied simple closed
curves – which join back up without crossing themselves – and multi-curves (collections of curves).
Mirzakhani proved a theorem counting the number of possible curves, and determining how it changes as you increase the length of the curves. As above, the formulae were generalisations of how quickly
the numbers grow as the length of the curve increases to a limit – but if you look at the formula, you might notice some common elements, highlighted in colour here:
“Counting curves and counting meanders are actually very similar,” Delecroix explains – “[you can] see a meander as a pair of simple closed curves on a surface; the minimal arches are very much
related to the cusps. You can use Mirzakhani’s theorem to prove our result – basically the two statements are equivalent.”
Mathematics often involves using theorems and results proved by other mathematicians – it’s one of the great strengths of a subject in which truth can be considered universal. In this case the
connection between the two different areas means that Delecroix and his collaborators can now use tools Mirzakhani developed in continuing this research – it’s a part of her legacy, as for anyone who
adds to our collective knowledge by research in maths.
1 comment
1. Mathematics can be seen as a game. The more moves a player “sees”, the better he is. The same applies to mathematics. The more ways a mathematician “sees” evidence from different subfields, the
better she is. Mirzakhani not only provided new proofs, but also new views – views that can open the eyes of other mathematicians. Perhaps the most illuminating work in mathematics is the work
that uses cross-cutting concepts. | {"url":"https://scilogs.spektrum.de/hlf/mirzakhani-and-meanders/","timestamp":"2024-11-03T23:41:30Z","content_type":"text/html","content_length":"81642","record_id":"<urn:uuid:83c94b92-34f0-489c-88cc-29b04a01a12b>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00047.warc.gz"} |
Roots worksheets
Square Roots and Cube Roots
Square Roots and Cube Roots
Square Roots and Cube Roots
Roots, Prefixes, & Suffixes
Chapter 3 : Squares, Square Roots, Cubes and Cube Roots.
Explore Worksheets by Subjects
Explore printable Roots worksheets
Roots worksheets are an essential tool for teachers looking to strengthen their students' math and number sense skills. These worksheets provide a variety of exercises and problems that focus on
understanding the concept of roots, including square roots, cube roots, and higher-order roots. By incorporating roots worksheets into their lesson plans, teachers can help students build a strong
foundation in math, develop their number sense, and enhance their problem-solving abilities. Additionally, these worksheets can be tailored to suit the needs of students across different grade
levels, ensuring that each learner receives the appropriate level of challenge and support. With roots worksheets, teachers can create engaging and effective math lessons that foster a deep
understanding of number sense and mathematical concepts.
Quizizz offers a comprehensive collection of roots worksheets, along with other math resources, to help teachers create dynamic and interactive learning experiences for their students. By integrating
Quizizz into their lesson plans, teachers can access a wide range of high-quality worksheets, quizzes, and games that target essential math skills, such as number sense, algebra, geometry, and more.
These resources are designed to be both engaging and educational, ensuring that students remain motivated and focused as they develop their math abilities. Furthermore, Quizizz allows teachers to
track their students' progress and performance, enabling them to identify areas of strength and weakness and adjust their instruction accordingly. With Quizizz, teachers can transform their math
lessons into exciting and effective learning experiences that promote a deep understanding of number sense and other critical mathematical concepts. | {"url":"https://quizizz.com/en/math-roots-worksheets","timestamp":"2024-11-10T02:26:20Z","content_type":"text/html","content_length":"154927","record_id":"<urn:uuid:21706885-e5c2-4b07-9af5-a1153d262758>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00516.warc.gz"} |
Extended odd Frechet half logistic distribution with it’s properties and applications - World Scientific News
Lifetime distributions describe the behavior of the length of life of individuals or components in survival or reliability analyses. They are important tools in modeling the different characteristics
of lifetime data sets emanating from various fields of human endeavor. Many lifetime distributions exist in the statistical literature but are commonly characterized with having many parameters which
may cause estimation related problems. To trade-off between simplicity and flexibility in modeling lifetime data sets with half logistic distribution, a new extension is proposed in this paper by
using the extended odd Frechet-G family of distributions. The new distribution has only two parameters and simple mathematical form that can be interpreted in terms of odds ratio. The statistical
properties of the distribution, including moments, quantile function and order statistics are studied. The unknown parameters were estimated by using two different estimation methods, namely, maximum
likelihood and maximum product of spacing. Monte Carlo simulation study is undertaken to compare the finite sample performance of these parameter estimation methods based on generated samples using
the quantile function of the new distribution. To demonstrate suitability in favor of the proposed distribution, three real data sets were analyzed and compared with four competitive models, two from
the extended odd Frechet-G family and the remaining two having the same baseline distribution as the proposed. Empirical findings show that the new two-parameter distribution compared well to the
four-parameter distributions of the same family and produced better results than the other extensions of half logistic distribution.
• Cordeiro, G. M., Alizadeh, M., & Diniz Marinho, P. R. (2016). The type I half-logistic family of distributions. Journal of Statistical Computation and Simulation, 86(4), 707-728.
• Jackson, D. L. (2007). The effect of the number of observations per parameter in misspecified confirmatory factor analytic models. Structural Equation Modeling: A Multidisciplinary Journal, 14
(1), 48-76
• Nasiru, S. (2018). Extended odd Fréchet-G family of distributions. Journal of Probability and Statistics 2018, Article ID 2931326
• Olapade, A. K. (2014). The type I generalized half logistic distribution. Journal of the Iranian Statistical Society, 13(1), 69-82.
• Proschan, F. (1963). Theoretical explanation of observed decreasing failure rate. Technometrics, 5(3), 375-383
• Seo, J. I., & Kang, S. B. (2015). Notes on the exponentiated half logistic distribution. Applied Mathematical Modelling 39 (1), 6491–6500
• Sirajo, M., Falgore, J. Y., Najmuddeen, M. S., & Umar, A. A. (2021). A New Reflected Minimax Distribution on a Bounded Domain: Theory and Application. Proceeding of the 5^th International
Conference of PSSN, 5(1)
• Sirajo, M., Shakil, M., & Ahsanullah, M. (2022). A Muth-Pareto Distribution: Properties, Estimation, Characterizations and Applications. Statistica, 82(3), 243-274
Download all article in PDF | {"url":"https://worldscientificnews.com/extended-odd-frechet-haft-logistic-distribytion-with-its-properties-and-applications/","timestamp":"2024-11-04T11:03:28Z","content_type":"text/html","content_length":"190005","record_id":"<urn:uuid:c3b5dd0e-e274-4f06-9144-0f2eb56d2d3e>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00198.warc.gz"} |
help PsychColorimetricData
help PsychColorimetric
This folder holds colorimetric data in .mat file form.
All data files are in a standard format.
A very useful source for on-line
colorimetric data is the CVRL database:
Many of the functions used here were downloaded from
that source and then splined to the wavelength sampling
used here (with extension by zeros).
B_xxx files contain basis functions. The basis functions themselves
are in the columns of a matrix with name B_xxx. There is also
a 3 by 1 row vector S_xxx that contains the wavelength sampling
information in the form [start delta numberSamples] where start
and delta are in nanometers.
den_xxx files contain optical density data. In log units. To
convert density values to transmittance, take 10^(-den). There
is also an S_xxx vector. Curiously, these are in column vectors
in the .mat files.
spd_xxx files contain spectral power distributions. Data are in
columns of matrix with name spd_xxx. There is also a S_xxx
sur_xxx files contain surface reflectance functions (range 0-1). Data
are in columns of matrix with name sur_xxx. There is also an S_xxx
T_xxx contain color matching functions or spectral sensitivities. Data
are in rows of a matrix with name T_xxx. There is also an S_xxx
vector. All are in energy units, so that multiplying by spectra in
energy (not quanta) gives desired result (often proportional to
isomerization rate).
Specific data files are listed below. Most, but not all, are sampled
between 380 nm and 780 nm at 5 nm intervals (S = [380 5 81]). This is
the CIE standard. Good coding practice requires using the S_xxx vector
loaded with the data and splining to the wavelength sampling you want to work
in. If I were to start this database again, I would have kept each function
at the resolution of its source.
In some cases, the original data were interpolated or extraploated (with zeros)
to put them data onto the CIE standard [380 5 81] wavelength.
See also: EnergyToQuanta, QuantaToEnergy, MakeItS, MakeItWls, MakeItStruct,
SplineSpd, SplineSrf, SplineCmf.
B_cieday - CIE daylight basis functions.
B_cohen - Cohens basis functions for Munsell surfaces.
B_monitor - Basis functions for a color monitor.
B_nickerson - Basis functions for Munsell surfaces.
B_roomillum - Basis functions for illuminants in Brainard’s room.
B_vrhel - Basis functions for Vrhel surface measurements.
den_lens_ws - Relative lens density data (re 700 nm). W&S, Table 1(2.4.6), p. 109.
- This is the first data set in the table, not the Norren and Vos
- data. It is for an open pupil.
den_lens_cie_1 - Part one of CIE component lens density function. CIE 170-1:2006, Table 6.10
den_lens_cie_2 - Part two of CIE component lens density function. CIE 170-1:2006, Table 6.10
den_lens_ssf - Stockman-Sharpe-Fach (1999) lens optical density spectrum.
- See CVRL database, CIE 170-1:2006, Table 6.10, 32 yo, pupil <= 3 degrees.
- This is also the sum of den_lens_cie_1 and den_lens_cie_2
den_mac_bone - Macular pigment density from Bone et al. (1992). See CVRL database, CIE 170-1:2006, Table 6.4, 2-deg.
den_mac_vos - Macular pigment density from Vos. See CVRL database.
den_mac_ws - Macular pigment density from W&S, Table 2(2.4.6), p. 112.
spd_appratusrel - Relative spectrum from a monitor. Used by IsomerizationInDishDemo.
spd_CIEA - Spectral power distribtion for CIE illuminant A.
spd_CIEC - Spectral power distribution for CIE illuminant C.
spd_D65 - Spectral power distribution for CIE illuminant D65.
spd_houser - 401 normalised illuminant spectral power distributions from:
Review of measures for light-source color rendition and considerations for a two-measure system for characterizing color rendition
Kevin W. Houser, Minchen Wei, Aurélien David, Michael R. Krames, and Xiangyou Sharon Shen
Optics Express, Vol. 21, Issue 8, pp. 10393-10411 (2013)
The mat file also contains a labels_houser variable, which is a cell array of string labels
for each spectrum.
spd_flourescent - Spectral power distribution for some flourescent lamp.
spd_incanCC - Spectral power distributions for Macbeth color checker patches under some incandescent lamp.
spd_phillybright - Direct bright sunlight measured through window and off of a piece of white paper towel
- on the floor of DB’s office in Philly, March 2013.
- Measurements made with PR-650, power in Watts/[m2-sr-wlband].
spd_xenonArc - Spectral power distribution for some xenon arc lamp.
spd_xenonFlash - Spectral power distribuiton for some xenon flash tube.
sur_koivisto - Koivisto reflectance measurements. Also has a labels variable.
- Converted from ASCII sourced from http://www.uef.fi/web/spectral/natural-colors
- using this script: https://github.com/da5nsy/Melanopsin_Computational/blob/4195492841471f943f62194d345269cbefcccec8/Auxiliary%20Scripts/loadKoivistoData.m
sur_krinov - Krinov reflectance measurements. Also has a labels variable.
- These were typed in by Larry Maloney long ago
- and put into PTB format by Danny Garside.
- See this script for code that extracted data
- from the text file as well as some comments about
- it: https://github.com/da5nsy/Melanopsin_Computational/blob/6a739e8d1e8c4e03a399b48727499407dddf6839/Auxiliary%20Scripts/Krinov_extract.m
sur_nickerson - The Nickerson measurements of the Munsell papers.
sur_macbeth - Reflectance of Macbeth color checker (not accurate, needs updating).
sur_vrhel - Reflectances measured by Vrhel.
T_CIE_Y2 - CIE physiologically relevant 2-degree luminosity function. See CVRL database.
T_CIE_Y10 - CIE physiologically relevant 10-degree luminosity function. See CVRL database.
T_cones_smj - Stockman-MacLeod-Johnson cone fundamentals. See CVRL database.
T_cones_smj10 - Stockman-MacLeod-Johnson 10-degree cone fundamentals. See CVRL database.
T_cones_ss2 - Stockman-Sharpe (2000) 2-degree cone fundamentals. Also the CIE 2006 fundamentals. See CVRL database.
T_cones_ss10 - Stockman-Sharpe (2000) 10-degree cone fundamentals. Also the CIE 2006 fundamentals. See CVRL database.
T_cones_sp - Smith-Pokorny cone fundamentals. Computed using PTB’s JuddVosToSmithPokorny. Each fundamental normalized to a max of 1.
T_cones_sp_orig - Original PTB version of Smith-Pokorny cone fundamentals. Specified between 380 and 780 nm,
- but non-zero only between 400 and 700 nm.
- This is probably because these were typed in by hand long ago from a table that only had data between 400 and 700 nm
- and then zero extended to match the wavelength sampling of other data files.
- It might be good to update these with data over the full specified range.
T_dogrec - Estimates of dog photoreceptor fundamentals. Order in file is L cone, S cone, rod.
T_DCS200 - Sensitivities of a Kodak DCS-200 color camera.
T_ground - Not entirely sure what this is, but it might be ground squirrel receptor sensitivities.
T_Lanom - Demarco et al. anomolous L cone sensitivity.
T_log10coneabsorbance_ss - Stockman-Sharpe (2000) log10 LMS cone photopigment absorbance.
- See CVRL database, CIE 170-1:2006, Table 6.6.
- Some S-cone values were unspecified for wls > 615 nm in the table.
- These were filled in here by linear extrapolation.
- Note that you want to raise 10 to these numbers
- to get absorbance, which itself is a log-like quantity.
T_Manom - Demarco et al. anomolous M cone sensitivity.
T_photopigments_ss - Removed. Use T_log10coneabsorbance and raise 10 to it.
T_melanopsin - Melanopsin fundamental as provided by Lucas at
- http://lucasgroup.lab.ls.manchester.ac.uk/research/measuringmelanopicilluminance/
- This is for human observers at the cornea, in energy units. Normalized to peak
- of unity.
T_rods - CIE scotopic luminous efficiency function.
T_stiles2 - Stiles-Burch 2-degree color matching functions.
T_stiles10 - Stiles-Burch 10-degree color matching functions.
T_ss2000_Y2 - Stockman-Sharpe (2000) 2-degree photopic luminance efficiency function. See CVRL database.
T_vos1978_Y - Judd-Vos 1978 photopic luminance efficiency function.
T_xyz1931 - CIE 1931 color matching functions (2-degree).
T_xyz1964 - CIE 1964 supplemental color matching functions (10-deg).
T_xyzCIEPhys2 - CIE XYZ CMF’s based on CIE 2-deg cone fundamentals.
- Obtained in 2016 from CVRL. At this time, these are proposed.
T_xyzCIEPhys10 - CIE XYZ CMF’s based on CIE 10-deg cone fundamentals.
- Obtained in 2016 from CVRL. At this time, these are proposed.
T_xyzJuddVos - Judd-Vos modified color matching functions. | {"url":"http://psychtoolbox.org/docs/PsychColorimetricMatFiles","timestamp":"2024-11-14T12:04:04Z","content_type":"text/html","content_length":"16749","record_id":"<urn:uuid:99836ab9-347a-40d0-92b2-b7b51a72fe91>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00316.warc.gz"} |
lacunaritycovariance citation info
Hingee K, Baddeley A, Caccetta P, Nair G (2019). “Computation of lacunarity from covariance of spatial binary maps.” Journal of Agricultural, Biological and Environmental Statistics. DOI: 10.1007
/s13253-019-00351-9. Online First.
Hingee K (2019). Spatial Statistics of Random Closed Sets for Earth Observations. Ph.D. thesis, School of Physics, Computing and Mathematics. Submitted thesis.
Corresponding BibTeX entries:
title = {Computation of lacunarity from covariance of spatial
binary maps},
author = {Kassel Hingee and Adrian Baddeley and Peter Caccetta and
Gopalan Nair},
year = {2019},
journal = {Journal of Agricultural, Biological and Environmental
note = {DOI: 10.1007/s13253-019-00351-9. Online First.},
title = {Spatial Statistics of Random Closed Sets for Earth
author = {Kassel Hingee},
year = {2019},
note = {Submitted thesis},
school = {School of Physics, Computing and Mathematics},
institution = {The University of Western Australia}, | {"url":"http://cran.stat.auckland.ac.nz/web/packages/lacunaritycovariance/citation.html","timestamp":"2024-11-04T21:43:35Z","content_type":"text/html","content_length":"1957","record_id":"<urn:uuid:0a7932fe-d9d5-431b-85b9-f7818bdfe66f>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00284.warc.gz"} |