content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Aditya Dhumuntarao
Postdoctoral Appointee, Quantum Information Science
Aditya joined Sandia as a postdoctoral appointee at the Quantum Performance Laboratory (QPL) in 2023. His research interests lie at the intersections of quantum information theory, and theoretical
physics. At Sandia, Aditya draws inspiration from his physics background in holographic dualities, entanglement dynamics, and classical field theories to shed light on quantum information processors
and quantum computers. At the QPL, Aditya is engaged in developing more efficient forward simulators using path integral methods and in constructing novel benchmarks applicable to near-term quantum
devices. With members at the Quantum Applications and Algorithms Collaboratory, Aditya is working on the foundations of qubit holographies.
Ph.D. Theoretical Physics University of Minnesota 2023
M.A.St Applied Mathematics & Theoretical Physics University of Cambridge 2017
B.S. (Double) Mathematics & Physics Arizona State University 2016
Research Interests
• Quantum Information Theory
• Classical Geometries
• Holographic Dualities
• University of Minnesota, Doctoral Dissertation Fellow (2021–2022)
• National Science Foundation, Graduate Research Fellow (2018–2021)
• Perimeter Institute for Theoretical Physics, Graduate Research Fellow (2017–2018) | {"url":"https://www.sandia.gov/people/staff/aditya-dhumuntarao/","timestamp":"2024-11-13T09:00:30Z","content_type":"text/html","content_length":"33312","record_id":"<urn:uuid:2f46c76c-3d95-4fb0-9356-417ce0344780>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00865.warc.gz"} |
Introduction to Statistical Learning | Why do we Need Statistical Learning?
Updated June 16, 2023
Overview of Statistical Learning
People can comprehend statistics as a study of collecting and analyzing data. Statistical Learning servers as a means to extract facts and summarize the available data. Since the 18th century, people
have predominantly utilized statistics for taxation and military purposes. Later towards the end of the 20th century, with the advent of computers, the applications of statistical concepts broadened
with its contributions towards technologies such as Machine Learning and Neural Nets. In this topic, we will learn about Introduction to Statistical Learning.
Statistical Learning enables data prediction and classification by effectively handling large volumes of data. It involves performing numerous iterations to analyze and select the most valuable and
relevant data, ultimately leading to an optimized result.
What is Statistical Learning?
Data is the fuel that drives Statistical Learning, and statistics are all about making sense of the data in hand. The results obtained from statistical learning help us determine trends and predict a
possible outcome for the future.
Statistical Learning is a tool to accomplish the goals of supervised and unsupervised Machine Learning techniques. With supervised statistical learning, we get to predict or estimate an outcome based
on previously present output, whereas, with unsupervised statistical learning, we find various patterns present within the data by clustering them into similar groups.
This article shows Supervised Statistical Learning methodologies, namely Regression and Classification.
1. Regression
Ever wondered how stock market predictions work? Or how a realtor estimates a house price? Or want to know if a new car in the market is worth the buy? If yes, you can find answers to these in
Regression’s statistical methodology. We utilize regression equations and analysis to make unbiased and accurate predictions of quantitative data. In addition, regression Analysis helps us to
identify the relationship between two or more variables.
In Simple Linear Regression (SLR), the relationship between a dependent variable (Y) and an independent variable (X) is determined. The equation illustrated below estimates how any change in X will
affect Y.
Bias-Variance Trade-off:
Linear Regression is all about finding the best fit straight line. Errors in regression models are mainly due to bias and variance. Minimizing these two prediction errors is essential to obtain a
generalized model that works well on training and testing data sets.
The linear Regression Model assumes the target variable has a linear relationship with its features. In reality, though, this might not be the case, and the inability of the Linear Regression model
to capture the true relationship is termed bias. The error due to bias is determined by calculating the difference between predicted and actual values.
The variance gives us a picture of how far the data points under consideration are spread. The Variance error refers to the fluctuations in the predictions when data sets are changed and are
calculated as the variability of a model prediction from a given data point.
Consider the scenarios where a model has high bias and low variance; then, it is likely to be less complex and probably will tend to underfit the data. If the model has low bias and high variance, it
will likely overfit the data, making it more complex and inconsistent when tried for unseen inputs. Hence to avoid such scenarios, there is a need to come to a common ground w.r.t the bias and
variance to have an acceptable model.
An ideal model is selected to have a low bias that can capture the proper relationship between its variables and low variance that produces consistent predictions across different datasets. This can
be achieved by obtaining a sweet spot between a simple and complex Regression Model. Regularization, bagging, and boosting help achieve the sweat spot.
2. Classification
Classification is applied to qualitative (non-numeric) data wherein the target variable can be classified or grouped into two (Binary Classification) or more classes (Multi-Class Classification).
Examples of Classification Statistical Learning include Tagging an e-mail as “spam” or “ham,” predicting customer churn, classifying animals based on their breeds, etc.
In classification, the output is often obtained using probabilistic approaches so that the results from the statistical inference give out a probability of an instance belonging to a class rather
than just assigning the best class.
Logistic Regression:
People widely use Logistic Regression as one of the classification algorithms for binary classification. This model uses a logistic function to determine the target value between the range of 0 to 1
and can be represented as the Sigmoid function shown below.
Why do we Need Statistical Learning?
In today’s age, if one thing is becoming more abundant than natural resources, then that ought to be Data. A million bytes of data we generate daily need a source for analyzing and summering them. If
not used wisely, people can easily misinterpret or manipulate these data to showcase only a particular point of view. Therefore, to avoid dangerous mishaps with data, Statistical Learning becomes a
tool to ensure data integrity and proper and efferent usage.
Statistical Learning helps us understand why a system behaves the way it does. It reduces ambiguity and produces results that matter in the real world. Statistical Learning provides accurate results
that can find medical, business, banking, and government applications.
Easily identifies patterns and trends. With the identified trends, targeting specific customers for specific products becomes more accessible.
Saves time. Hundreds and thousands of epochs for achieving the optimized result are possible within a few minutes.
Can work with large numbers and a wide variety of parameters.
Improves Decision Making and Prediction techniques by logically analyzing the data rather than calling shots based on “gut feeling.”
Once the system is functional, no human intervention is required except for occasional updates to maintain its functionality.
Conclusion – Introduction to Statistical Learning
With our advancing technologies, we now deal with more statistics in our daily lives than ever. The correct interpretation of the stories told by every billion bytes of data we accumulate is
impossible without intersecting statistics with other branches such as Data Mining, Machine Learning, and Artificial Intelligence.
Recommended Articles
This is a guide to Introduction to Statistical Learning. Here we discuss the introduction, why do we need statistical learning and advantages. You may also have a look at the following articles to
learn more – | {"url":"https://www.educba.com/introduction-to-statistical-learning/","timestamp":"2024-11-05T00:22:10Z","content_type":"text/html","content_length":"317310","record_id":"<urn:uuid:c3729436-597b-4f41-8f8d-51049e80cc9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00755.warc.gz"} |
Archimedes of Syracuse
(Greek: Ἀρχιμήδης)
Archimedes Thoughtful by Fetti (1620)
c. 287 BC
Born Syracuse, Sicily
Magna Graecia
Died c. 212 BC (aged around 75)
Residence Syracuse, Sicily
Fields Engineering
Archimedes' principle
Archimedes' screw
Known for hydrostatics
Archimedes of Syracuse (Greek: Ἀρχιμήδης; c.287 BC – c.212 BC) was a Greek mathematician, physicist, engineer, inventor, and astronomer. Although few details of his life are known, he is regarded
as one of the leading scientists in classical antiquity. Among his advances in physics are the foundations of hydrostatics, statics and an explanation of the principle of the lever. He is credited
with designing innovative machines, including siege engines and the screw pump that bears his name. Modern experiments have tested claims that Archimedes designed machines capable of lifting
attacking ships out of the water and setting ships on fire using an array of mirrors.
Archimedes is generally considered to be the greatest mathematician of antiquity and one of the greatest of all time. He used the method of exhaustion to calculate the area under the arc of a
parabola with the summation of an infinite series, and gave a remarkably accurate approximation of pi. He also defined the spiral bearing his name, formulae for the volumes of surfaces of revolution
and an ingenious system for expressing very large numbers.
Archimedes died during the Siege of Syracuse when he was killed by a Roman soldier despite orders that he should not be harmed. Cicero describes visiting the tomb of Archimedes, which was surmounted
by a sphere inscribed within a cylinder. Archimedes had proven that the sphere has two thirds of the volume and surface area of the cylinder (including the bases of the latter), and regarded this as
the greatest of his mathematical achievements.
Unlike his inventions, the mathematical writings of Archimedes were little known in antiquity. Mathematicians from Alexandria read and quoted him, but the first comprehensive compilation was not made
until c. 530 AD by Isidore of Miletus, while commentaries on the works of Archimedes written by Eutocius in the sixth century AD opened them to wider readership for the first time. The relatively few
copies of Archimedes' written work that survived through the Middle Ages were an influential source of ideas for scientists during the Renaissance, while the discovery in 1906 of previously unknown
works by Archimedes in the Archimedes Palimpsest has provided new insights into how he obtained mathematical results.
Archimedes was born c. 287 BC in the seaport city of Syracuse, Sicily, at that time a self-governing colony in Magna Graecia. The date of birth is based on a statement by the Byzantine Greek
historian John Tzetzes that Archimedes lived for 75 years. In The Sand Reckoner, Archimedes gives his father's name as Phidias, an astronomer about whom nothing is known. Plutarch wrote in his
Parallel Lives that Archimedes was related to King Hiero II, the ruler of Syracuse. A biography of Archimedes was written by his friend Heracleides but this work has been lost, leaving the details of
his life obscure. It is unknown, for instance, whether he ever married or had children. During his youth, Archimedes may have studied in Alexandria, Egypt, where Conon of Samos and Eratosthenes of
Cyrene were contemporaries. He referred to Conon of Samos as his friend, while two of his works ( The Method of Mechanical Theorems and the Cattle Problem) have introductions addressed to
Archimedes died c. 212 BC during the Second Punic War, when Roman forces under General Marcus Claudius Marcellus captured the city of Syracuse after a two-year-long siege. According to the popular
account given by Plutarch, Archimedes was contemplating a mathematical diagram when the city was captured. A Roman soldier commanded him to come and meet General Marcellus but he declined, saying
that he had to finish working on the problem. The soldier was enraged by this, and killed Archimedes with his sword. Plutarch also gives a lesser-known account of the death of Archimedes which
suggests that he may have been killed while attempting to surrender to a Roman soldier. According to this story, Archimedes was carrying mathematical instruments, and was killed because the soldier
thought that they were valuable items. General Marcellus was reportedly angered by the death of Archimedes, as he considered him a valuable scientific asset and had ordered that he not be harmed.
The last words attributed to Archimedes are "Do not disturb my circles" (Greek: μή μου τοὺς κύκλους τάραττε), a reference to the circles in the mathematical drawing that he was supposedly studying
when disturbed by the Roman soldier. This quote is often given in Latin as " Noli turbare circulos meos," but there is no reliable evidence that Archimedes uttered these words and they do not appear
in the account given by Plutarch.
The tomb of Archimedes carried a sculpture illustrating his favorite mathematical proof, consisting of a sphere and a cylinder of the same height and diameter. Archimedes had proven that the volume
and surface area of the sphere are two thirds that of the cylinder including its bases. In 75 BC, 137 years after his death, the Roman orator Cicero was serving as quaestor in Sicily. He had heard
stories about the tomb of Archimedes, but none of the locals was able to give him the location. Eventually he found the tomb near the Agrigentine gate in Syracuse, in a neglected condition and
overgrown with bushes. Cicero had the tomb cleaned up, and was able to see the carving and read some of the verses that had been added as an inscription. A tomb discovered in a hotel courtyard in
Syracuse in the early 1960s was claimed to be that of Archimedes, but its location today is unknown.
The standard versions of the life of Archimedes were written long after his death by the historians of Ancient Rome. The account of the siege of Syracuse given by Polybius in his Universal History
was written around seventy years after Archimedes' death, and was used subsequently as a source by Plutarch and Livy. It sheds little light on Archimedes as a person, and focuses on the war machines
that he is said to have built in order to defend the city.
Discoveries and inventions
Archimedes' principle
The most widely known anecdote about Archimedes tells of how he invented a method for determining the volume of an object with an irregular shape. According to Vitruvius, a votive crown for a temple
had been made for King Hiero II, who had supplied the pure gold to be used, and Archimedes was asked to determine whether some silver had been substituted by the dishonest goldsmith. Archimedes had
to solve the problem without damaging the crown, so he could not melt it down into a regularly shaped body in order to calculate its density. While taking a bath, he noticed that the level of the
water in the tub rose as he got in, and realized that this effect could be used to determine the volume of the crown. For practical purposes water is incompressible, so the submerged crown would
displace an amount of water equal to its own volume. By dividing the mass of the crown by the volume of water displaced, the density of the crown could be obtained. This density would be lower than
that of gold if cheaper and less dense metals had been added. Archimedes then took to the streets naked, so excited by his discovery that he had forgotten to dress, crying " Eureka!" (Greek: "εὕρηκα
!," meaning "I have found it!"). The test was conducted successfully, proving that silver had indeed been mixed in.
The story of the golden crown does not appear in the known works of Archimedes. Moreover, the practicality of the method it describes has been called into question, due to the extreme accuracy with
which one would have to measure the water displacement. Archimedes may have instead sought a solution that applied the principle known in hydrostatics as Archimedes' principle, which he describes in
his treatise On Floating Bodies. This principle states that a body immersed in a fluid experiences a buoyant force equal to the weight of the fluid it displaces. Using this principle, it would have
been possible to compare the density of the golden crown to that of solid gold by balancing the crown on a scale with a gold reference sample, then immersing the apparatus in water. The difference in
density between the two samples would cause the scale to tip accordingly. Galileo considered it "probable that this method is the same that Archimedes followed, since, besides being very accurate, it
is based on demonstrations found by Archimedes himself." In a 12th century text titled Mappae clavicula there are instructions on how to perform the weighings in the water in order to calculate the
percentage of silver used, and thus solve the problem. The Latin poem Carmen de ponderibus et mensuris of the 4th or 5th century describes the use of a hydrostatic balance to solve the problem of the
crown, and attributes the method to Archimedes.
Archimedes' screw
A large part of Archimedes' work in engineering arose from fulfilling the needs of his home city of Syracuse. The Greek writer Athenaeus of Naucratis described how King Hiero II commissioned
Archimedes to design a huge ship, the Syracusia, which could be used for luxury travel, carrying supplies, and as a naval warship. The Syracusia is said to have been the largest ship built in
classical antiquity. According to Athenaeus, it was capable of carrying 600 people and included garden decorations, a gymnasium and a temple dedicated to the goddess Aphrodite among its facilities.
Since a ship of this size would leak a considerable amount of water through the hull, the Archimedes screw was purportedly developed in order to remove the bilge water. Archimedes' machine was a
device with a revolving screw-shaped blade inside a cylinder. It was turned by hand, and could also be used to transfer water from a low-lying body of water into irrigation canals. The Archimedes
screw is still in use today for pumping liquids and granulated solids such as coal and grain. The Archimedes screw described in Roman times by Vitruvius may have been an improvement on a screw pump
that was used to irrigate the Hanging Gardens of Babylon. The world's first seagoing steamship with a screw propeller was the SS Archimedes, which was launched in 1839 and named in honour of
Archimedes and his work on the screw.
Claw of Archimedes
The Claw of Archimedes is a weapon that he is said to have designed in order to defend the city of Syracuse. Also known as "the ship shaker," the claw consisted of a crane-like arm from which a large
metal grappling hook was suspended. When the claw was dropped onto an attacking ship the arm would swing upwards, lifting the ship out of the water and possibly sinking it. There have been modern
experiments to test the feasibility of the claw, and in 2005 a television documentary entitled Superweapons of the Ancient World built a version of the claw and concluded that it was a workable
Heat ray
The 2nd century AD author Lucian wrote that during the Siege of Syracuse (c. 214–212 BC), Archimedes destroyed enemy ships with fire. Centuries later, Anthemius of Tralles mentions burning-glasses as
Archimedes' weapon. The device, sometimes called the "Archimedes heat ray", was used to focus sunlight onto approaching ships, causing them to catch fire.
This purported weapon has been the subject of ongoing debate about its credibility since the Renaissance. René Descartes rejected it as false, while modern researchers have attempted to recreate the
effect using only the means that would have been available to Archimedes. It has been suggested that a large array of highly polished bronze or copper shields acting as mirrors could have been
employed to focus sunlight onto a ship. This would have used the principle of the parabolic reflector in a manner similar to a solar furnace.
A test of the Archimedes heat ray was carried out in 1973 by the Greek scientist Ioannis Sakkas. The experiment took place at the Skaramagas naval base outside Athens. On this occasion 70 mirrors
were used, each with a copper coating and a size of around five by three feet (1.5 by 1 m). The mirrors were pointed at a plywood mock-up of a Roman warship at a distance of around 160 feet (50 m).
When the mirrors were focused accurately, the ship burst into flames within a few seconds. The plywood ship had a coating of tar paint, which may have aided combustion. A coating of tar would have
been commonplace on ships in the classical era.
In October 2005 a group of students from the Massachusetts Institute of Technology carried out an experiment with 127 one-foot (30 cm) square mirror tiles, focused on a mock-up wooden ship at a range
of around 100 feet (30 m). Flames broke out on a patch of the ship, but only after the sky had been cloudless and the ship had remained stationary for around ten minutes. It was concluded that the
device was a feasible weapon under these conditions. The MIT group repeated the experiment for the television show MythBusters, using a wooden fishing boat in San Francisco as the target. Again some
charring occurred, along with a small amount of flame. In order to catch fire, wood needs to reach its autoignition temperature, which is around 300 °C (570 °F).
When MythBusters broadcast the result of the San Francisco experiment in January 2006, the claim was placed in the category of "busted" (or failed) because of the length of time and the ideal weather
conditions required for combustion to occur. It was also pointed out that since Syracuse faces the sea towards the east, the Roman fleet would have had to attack during the morning for optimal
gathering of light by the mirrors. MythBusters also pointed out that conventional weaponry, such as flaming arrows or bolts from a catapult, would have been a far easier way of setting a ship on fire
at short distances.
In December 2010, MythBusters again looked at the heat ray story in a special edition featuring Barack Obama, entitled President's Challenge. Several experiments were carried out, including a large
scale test with 500 schoolchildren aiming mirrors at a mock-up of a Roman sailing ship 400 feet (120 m) away. In all of the experiments, the sail failed to reach the 210 °C (410 °F) required to catch
fire, and the verdict was again "busted". The show concluded that a more likely effect of the mirrors would have been blinding, dazzling, or distracting the crew of the ship.
Other discoveries and inventions
While Archimedes did not invent the lever, he gave an explanation of the principle involved in his work On the Equilibrium of Planes. Earlier descriptions of the lever are found in the Peripatetic
school of the followers of Aristotle, and are sometimes attributed to Archytas. According to Pappus of Alexandria, Archimedes' work on levers caused him to remark: "Give me a place to stand on, and I
will move the Earth." (Greek: δῶς μοι πᾶ στῶ καὶ τὰν γᾶν κινάσω) Plutarch describes how Archimedes designed block-and-tackle pulley systems, allowing sailors to use the principle of leverage to lift
objects that would otherwise have been too heavy to move. Archimedes has also been credited with improving the power and accuracy of the catapult, and with inventing the odometer during the First
Punic War. The odometer was described as a cart with a gear mechanism that dropped a ball into a container after each mile traveled.
Cicero (106–43 BC) mentions Archimedes briefly in his dialogue De re publica, which portrays a fictional conversation taking place in 129 BC. After the capture of Syracuse c. 212 BC, General Marcus
Claudius Marcellus is said to have taken back to Rome two mechanisms, constructed by Archimedes and used as aids in astronomy, which showed the motion of the Sun, Moon and five planets. Cicero
mentions similar mechanisms designed by Thales of Miletus and Eudoxus of Cnidus. The dialogue says that Marcellus kept one of the devices as his only personal loot from Syracuse, and donated the
other to the Temple of Virtue in Rome. Marcellus' mechanism was demonstrated, according to Cicero, by Gaius Sulpicius Gallus to Lucius Furius Philus, who described it thus:
Hanc sphaeram Gallus cum moveret, fiebat ut soli luna totidem conversionibus in aere illo quot diebus in ipso caelo succederet, ex quo et in caelo sphaera solis fieret eadem illa defectio, et
incideret luna tum in eam metam quae esset umbra terrae, cum sol e regione. — When Gallus moved the globe, it happened that the Moon followed the Sun by as many turns on that bronze contrivance
as in the sky itself, from which also in the sky the Sun's globe became to have that same eclipse, and the Moon came then to that position which was its shadow on the Earth, when the Sun was in
This is a description of a planetarium or orrery. Pappus of Alexandria stated that Archimedes had written a manuscript (now lost) on the construction of these mechanisms entitled On Sphere-Making.
Modern research in this area has been focused on the Antikythera mechanism, another device from classical antiquity that was probably designed for the same purpose. Constructing mechanisms of this
kind would have required a sophisticated knowledge of differential gearing. This was once thought to have been beyond the range of the technology available in ancient times, but the discovery of the
Antikythera mechanism in 1902 has confirmed that devices of this kind were known to the ancient Greeks.
While he is often regarded as a designer of mechanical devices, Archimedes also made contributions to the field of mathematics. Plutarch wrote: "He placed his whole affection and ambition in those
purer speculations where there can be no reference to the vulgar needs of life."
Archimedes was able to use infinitesimals in a way that is similar to modern integral calculus. Through proof by contradiction ( reductio ad absurdum), he could give answers to problems to an
arbitrary degree of accuracy, while specifying the limits within which the answer lay. This technique is known as the method of exhaustion, and he employed it to approximate the value of π. In
Measurement of a Circle he did this by drawing a larger regular hexagon outside a circle and a smaller regular hexagon inside the circle, and progressively doubling the number of sides of each
regular polygon, calculating the length of a side of each polygon at each step. As the number of sides increases, it becomes a more accurate approximation of a circle. After four such steps, when the
polygons had 96 sides each, he was able to determine that the value of π lay between 3^1⁄[7] (approximately 3.1429) and 3^10⁄[71] (approximately 3.1408), consistent with its actual value of
approximately 3.1416. He also proved that the area of a circle was equal to π multiplied by the square of the radius of the circle (πr^2). In On the Sphere and Cylinder, Archimedes postulates that
any magnitude when added to itself enough times will exceed any given magnitude. This is the Archimedean property of real numbers.
In Measurement of a Circle, Archimedes gives the value of the square root of 3 as lying between ^265⁄[153] (approximately 1.7320261) and ^1351⁄[780] (approximately 1.7320512). The actual value is
approximately 1.7320508, making this a very accurate estimate. He introduced this result without offering any explanation of how he had obtained it. This aspect of the work of Archimedes caused John
Wallis to remark that he was: "as it were of set purpose to have covered up the traces of his investigation as if he had grudged posterity the secret of his method of inquiry while he wished to
extort from them assent to his results." It is possible that he used an iterative procedure to calculate these values.
In The Quadrature of the Parabola, Archimedes proved that the area enclosed by a parabola and a straight line is ^4⁄[3] times the area of a corresponding inscribed triangle as shown in the figure at
right. He expressed the solution to the problem as an infinite geometric series with the common ratio ^1⁄[4]:
$\sum_{n=0}^\infty 4^{-n} = 1 + 4^{-1} + 4^{-2} + 4^{-3} + \cdots = {4\over 3}. \;$
If the first term in this series is the area of the triangle, then the second is the sum of the areas of two triangles whose bases are the two smaller secant lines, and so on. This proof uses a
variation of the series 1/4 + 1/16 + 1/64 + 1/256 + · · · which sums to ^1⁄[3].
In The Sand Reckoner, Archimedes set out to calculate the number of grains of sand that the universe could contain. In doing so, he challenged the notion that the number of grains of sand was too
large to be counted. He wrote: "There are some, King Gelo (Gelo II, son of Hiero II), who think that the number of the sand is infinite in multitude; and I mean by the sand not only that which exists
about Syracuse and the rest of Sicily but also that which is found in every region whether inhabited or uninhabited." To solve the problem, Archimedes devised a system of counting based on the
myriad. The word is from the Greek μυριάς murias, for the number 10,000. He proposed a number system using powers of a myriad of myriads (100 million) and concluded that the number of grains of sand
required to fill the universe would be 8 vigintillion, or 8×10^63.
The works of Archimedes were written in Doric Greek, the dialect of ancient Syracuse. The written work of Archimedes has not survived as well as that of Euclid, and seven of his treatises are known
to have existed only through references made to them by other authors. Pappus of Alexandria mentions On Sphere-Making and another work on polyhedra, while Theon of Alexandria quotes a remark about
refraction from the now-lost Catoptrica. During his lifetime, Archimedes made his work known through correspondence with the mathematicians in Alexandria. The writings of Archimedes were collected by
the Byzantine architect Isidore of Miletus (c. 530 AD), while commentaries on the works of Archimedes written by Eutocius in the sixth century AD helped to bring his work a wider audience.
Archimedes' work was translated into Arabic by Thābit ibn Qurra (836–901 AD), and Latin by Gerard of Cremona (c. 1114–1187 AD). During the Renaissance, the Editio Princeps (First Edition) was
published in Basel in 1544 by Johann Herwagen with the works of Archimedes in Greek and Latin. Around the year 1586 Galileo Galilei invented a hydrostatic balance for weighing metals in air and water
after apparently being inspired by the work of Archimedes.
Surviving works
• On the Equilibrium of Planes (two volumes)
The first book is in fifteen propositions with seven postulates, while the second book is in ten propositions. In this work Archimedes explains the Law of the Lever, stating, "Magnitudes are in
equilibrium at distances reciprocally proportional to their weights."
Archimedes uses the principles derived to calculate the areas and centers of gravity of various geometric figures including triangles, parallelograms and parabolas.
• On the Measurement of a Circle
This is a short work consisting of three propositions. It is written in the form of a correspondence with Dositheus of Pelusium, who was a student of Conon of Samos. In Proposition II, Archimedes
gives an approximation of the value of pi (π), showing that it is greater than ^223⁄[71] and less than ^22⁄[7].
This work of 28 propositions is also addressed to Dositheus. The treatise defines what is now called the Archimedean spiral. It is the locus of points corresponding to the locations over time of
a point moving away from a fixed point with a constant speed along a line which rotates with constant angular velocity. Equivalently, in polar coordinates (r, θ) it can be described by the
$\, r=a+b\theta$
with real numbers a and b. This is an early example of a mechanical curve (a curve traced by a moving point) considered by a Greek mathematician.
• On the Sphere and the Cylinder (two volumes)
In this treatise addressed to Dositheus, Archimedes obtains the result of which he was most proud, namely the relationship between a sphere and a circumscribed cylinder of the same height and
diameter. The volume is ^4⁄[3]πr^3 for the sphere, and 2πr^3 for the cylinder. The surface area is 4πr^2 for the sphere, and 6πr^2 for the cylinder (including its two bases), where r is the
radius of the sphere and cylinder. The sphere has a volume two-thirds that of the circumscribed cylinder. Similarly, the sphere has an area two-thirds that of the cylinder (including the bases).
A sculpted sphere and cylinder were placed on the tomb of Archimedes at his request.
This is a work in 32 propositions addressed to Dositheus. In this treatise Archimedes calculates the areas and volumes of sections of cones, spheres, and paraboloids.
• On Floating Bodies (two volumes)
In the first part of this treatise, Archimedes spells out the law of equilibrium of fluids, and proves that water will adopt a spherical form around a centre of gravity. This may have been an
attempt at explaining the theory of contemporary Greek astronomers such as Eratosthenes that the Earth is round. The fluids described by Archimedes are not self-gravitating, since he assumes the
existence of a point towards which all things fall in order to derive the spherical shape.
In the second part, he calculates the equilibrium positions of sections of paraboloids. This was probably an idealization of the shapes of ships' hulls. Some of his sections float with the base
under water and the summit above water, similar to the way that icebergs float. Archimedes' principle of buoyancy is given in the work, stated as follows:
Any body wholly or partially immersed in a fluid experiences an upthrust equal to, but opposite in sense to, the weight of the fluid displaced.
• The Quadrature of the Parabola
In this work of 24 propositions addressed to Dositheus, Archimedes proves by two methods that the area enclosed by a parabola and a straight line is 4/3 multiplied by the area of a triangle with
equal base and height. He achieves this by calculating the value of a geometric series that sums to infinity with the ratio ^1⁄[4].
This is a dissection puzzle similar to a Tangram, and the treatise describing it was found in more complete form in the Archimedes Palimpsest. Archimedes calculates the areas of the 14 pieces
which can be assembled to form a square. Research published by Dr. Reviel Netz of Stanford University in 2003 argued that Archimedes was attempting to determine how many ways the pieces could be
assembled into the shape of a square. Dr. Netz calculates that the pieces can be made into a square 17,152 ways. The number of arrangements is 536 when solutions that are equivalent by rotation
and reflection have been excluded. The puzzle represents an example of an early problem in combinatorics.
The origin of the puzzle's name is unclear, and it has been suggested that it is taken from the Ancient Greek word for throat or gullet, stomachos (στόμαχος). Ausonius refers to the puzzle as
Ostomachion, a Greek compound word formed from the roots of ὀστέον (osteon, bone) and μάχη (machē – fight). The puzzle is also known as the Loculus of Archimedes or Archimedes' Box.
• Archimedes' cattle problem
This work was discovered by Gotthold Ephraim Lessing in a Greek manuscript consisting of a poem of 44 lines, in the Herzog August Library in Wolfenbüttel, Germany in 1773. It is addressed to
Eratosthenes and the mathematicians in Alexandria. Archimedes challenges them to count the numbers of cattle in the Herd of the Sun by solving a number of simultaneous Diophantine equations.
There is a more difficult version of the problem in which some of the answers are required to be square numbers. This version of the problem was first solved by A. Amthor in 1880, and the answer
is a very large number, approximately 7.760271×10^206,544.
In this treatise, Archimedes counts the number of grains of sand that will fit inside the universe. This book mentions the heliocentric theory of the solar system proposed by Aristarchus of
Samos, as well as contemporary ideas about the size of the Earth and the distance between various celestial bodies. By using a system of numbers based on powers of the myriad, Archimedes
concludes that the number of grains of sand required to fill the universe is 8×10^63 in modern notation. The introductory letter states that Archimedes' father was an astronomer named Phidias.
The Sand Reckoner or Psammites is the only surviving work in which Archimedes discusses his views on astronomy.
• The Method of Mechanical Theorems
This treatise was thought lost until the discovery of the Archimedes Palimpsest in 1906. In this work Archimedes uses infinitesimals, and shows how breaking up a figure into an infinite number of
infinitely small parts can be used to determine its area or volume. Archimedes may have considered this method lacking in formal rigor, so he also used the method of exhaustion to derive the
results. As with The Cattle Problem, The Method of Mechanical Theorems was written in the form of a letter to Eratosthenes in Alexandria.
Apocryphal works
Archimedes' Book of Lemmas or Liber Assumptorum is a treatise with fifteen propositions on the nature of circles. The earliest known copy of the text is in Arabic. The scholars T. L. Heath and
Marshall Clagett argued that it cannot have been written by Archimedes in its current form, since it quotes Archimedes, suggesting modification by another author. The Lemmas may be based on an
earlier work by Archimedes that is now lost.
It has also been claimed that Heron's formula for calculating the area of a triangle from the length of its sides was known to Archimedes. However, the first reliable reference to the formula is
given by Heron of Alexandria in the 1st century AD.
Archimedes Palimpsest
The foremost document containing the work of Archimedes is the Archimedes Palimpsest. In 1906, the Danish professor Johan Ludvig Heiberg visited Constantinople and examined a 174-page goatskin
parchment of prayers written in the 13th century AD. He discovered that it was a palimpsest, a document with text that had been written over an erased older work. Palimpsests were created by scraping
the ink from existing works and reusing them, which was a common practice in the Middle Ages as vellum was expensive. The older works in the palimpsest were identified by scholars as 10th century AD
copies of previously unknown treatises by Archimedes. The parchment spent hundreds of years in a monastery library in Constantinople before being sold to a private collector in the 1920s. On October
29, 1998 it was sold at auction to an anonymous buyer for $2 million at Christie's in New York. The palimpsest holds seven treatises, including the only surviving copy of On Floating Bodies in the
original Greek. It is the only known source of The Method of Mechanical Theorems, referred to by Suidas and thought to have been lost forever. Stomachion was also discovered in the palimpsest, with a
more complete analysis of the puzzle than had been found in previous texts. The palimpsest is now stored at the Walters Art Museum in Baltimore, Maryland, where it has been subjected to a range of
modern tests including the use of ultraviolet and x-ray light to read the overwritten text.
The treatises in the Archimedes Palimpsest are: On the Equilibrium of Planes, On Spirals, Measurement of a Circle, On the Sphere and the Cylinder, On Floating Bodies, The Method of Mechanical
Theorems and Stomachion.
• There is a crater on the Moon named Archimedes (29.7° N, 4.0° W) in his honour, as well as a lunar mountain range, the Montes Archimedes (25.3° N, 4.6° W).
• The asteroid 3600 Archimedes is named after him.
• The Fields Medal for outstanding achievement in mathematics carries a portrait of Archimedes, along with a carving illustrating his proof on the sphere and the cylinder. The inscription around
the head of Archimedes is a quote attributed to him which reads in Latin: "Transire suum pectus mundoque potiri" (Rise above oneself and grasp the world).
• Archimedes has appeared on postage stamps issued by East Germany (1973), Greece (1983), Italy (1983), Nicaragua (1971), San Marino (1982), and Spain (1963).
• The exclamation of Eureka! attributed to Archimedes is the state motto of California. In this instance the word refers to the discovery of gold near Sutter's Mill in 1848 which sparked the
California Gold Rush.
• A movement for civic engagement targeting universal access to health care in the US state of Oregon has been named the "Archimedes Movement," headed by former Oregon Governor John Kitzhaber.
The Works of Archimedes online
• Text in Classical Greek: PDF scans of Heiberg's edition of the Works of Archimedes, now in the public domain
• In English translation: The Works of Archimedes, trans. T.L. Heath; supplemented by The Method of Mechanical Theorems, trans. L.G. Robinson | {"url":"https://ftp.worldpossible.org/endless/eos-rachel/RACHEL/RACHEL/modules/wikipedia_for_schools/wp/a/Archimedes.htm","timestamp":"2024-11-12T17:27:07Z","content_type":"text/html","content_length":"61861","record_id":"<urn:uuid:23ff7a08-f410-415f-aaec-c7b0e458649a>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00097.warc.gz"} |
New thread.
Here’s another probability question.
Your mate has a proposition.
You choose a number between 1 and 6.
You then roll three fair dice. If your number comes up he’ll pay you a quid, if not then you pay him a quid.
A fair game, right? You can choose any one of six numbers but with three dice you have a fifty-fifty chance of hitting your number.
Your mate see’s you’re hesitant, so he sweetens the deal. If your number comes up on two of the dice he’ll pay you double (£2).
Not enough? If all three dice show your number he’ll pay you triple (£3).
Your mate is practically a charity!
Should you play his game? If not, why not?
You’re frighteningly good at this.
It’s difficult for me to choose a favourite mathematician, but John Conway would make my “top ten”.
He didn’t get the next sequence, but once it was explained to him he performed detailed analysis of the sequence. Don’t feel bad if you don’t get it, but don’t google it either.
I’ll present the raw sequence and then add some punctuation to help.
That’s the sequence where the genius mathematician Conway couldn’t determine the next term (intimidated yet?).
No advanced maths is needed. A child could write the next term.
With punctuation and spacing to help see the pattern. Each term on a new line, commas and semicolons added for readability.
What’s next?
Between eating dinner late, scratching my head and trying to figure it out, my guess is 11122132 with me losing my way halfway through the latter part of the sequence. I'm perfectly happy to have the
errors of my ways explained!
Between eating dinner late, scratching my head and trying to figure it out, my guess is 11122132 with me losing my way halfway through the latter part of the sequence. I'm perfectly happy to have
the errors of my ways explained!
Here’s the sequence again.
Let’s look at the first term.
What do we have?
But the digits repeat in later terms. What do we have in the first term?
One 1.
Does that help?
Here’s the sequence again.
Let’s look at the first term.
What do we have?
But the digits repeat in later terms. What do we have in the first term?
One 1.
Does that help?
Not much unfortunately, I understood your tiered/pyramid interpretation better, 1,1,2- 1,1,3 etc could see a correlation forming there.
Here’s another probability question.
Your mate has a proposition.
You choose a number between 1 and 6.
You then roll three fair dice. If your number comes up he’ll pay you a quid, if not then you pay him a quid.
A fair game, right? You can choose any one of six numbers but with three dice you have a fifty-fifty chance of hitting your number.
Your mate see’s you’re hesitant, so he sweetens the deal. If your number comes up on two of the dice he’ll pay you double (£2).
Not enough? If all three dice show your number he’ll pay you triple (£3).
Your mate is practically a charity!
Should you play his game? If not, why not?
Aaagh I can't explain what I mean adequately...... No because your mate can't lose, and it's like an accumulator (well as I see it), the odds of you hitting your number on the trot are multiplied
each time becoming rarer to achieve. Something like that anyways.
You have a 1 in 6 chance with a die. 1 in 2 chance with a coin.
Not much unfortunately, I understood your tiered/pyramid interpretation better, 1,1,2- 1,1,3 etc could see a correlation forming there.
So it’s
We have one 1.
Let’s write that down. We have 11
Aaagh I can't explain what I mean adequately...... No because your mate can't lose, and it's like an accumulator (well as I see it), the odds of you hitting your number on the trot are multiplied
each time becoming rarer to achieve. Something like that anyways.
That’s a horrible feeling. You know something but can’t explain it. It’s like when someone asks you to define a word you know.
Maths is like that. When you’re young maths is about finding the answer (x=...).
Later, it becomes “show that...” or “prove that...”. They literally
you the answer, but the job becomes telling the story of why something is true. A much harder proposition.
It’s harder to tell a story why it’s a bad bet (and you’re right, it is).
I’ll wait and see if DA offers an explanation before I offer my proof.
The 'mate' isn't choosing his number either and has 5 in 6 chance of winning.
The 'mate' isn't choosing his number either and has 5 in 6 chance of winning.
But you roll three dice.
Say you bet on 6. If the dice came up 123 then you pay him a quid.
If it was 2,3,6 he pays you a quid.
If it was 2,6,6 he pays you two quid.
If the dice are all different then half the numbers pay out - a fifty/fifty bet. Half the time you win, half you lose. Like a coin toss.
If you get two or three matches then you make even more. So why’s it a bad bet?
So it’s
We have one 1.
Let’s write that down. We have 11
Right I'm being perfectly truthful I had to look up John Conway to gain an understanding of the sequence, otherwise I'd have chewed through my arms by now, but I honestly didn't check the answer,
just the method, so I still may be wrong: 1113213211.
Infuriatingly simple idea, yet confusing and brilliant at the same time, very clever mixing the word of the number with the actual numbers together, fools the brain into thinking pure number logical
sequence, rather than counting whilst you're going. If that makes any sense.
The odds are against the gambler as always.He needs to be convinced he might just win.
I think the probability is 1/6^n n being number of dice.
But await mathematical proof.
Right I'm being perfectly truthful I had to look up John Conway to gain an understanding of the sequence, otherwise I'd have chewed through my arms by now, but I honestly didn't check the answer,
just the method, so I still may be wrong: 1113213211.
Infuriatingly simple idea, yet confusing and brilliant at the same time, very clever mixing the word of the number with the actual numbers together, fools the brain into thinking pure number
logical sequence, rather than counting whilst you're going. If that makes any sense.
Maddening, isn’t it?
Can you imagine how hard Conway found it? I’m sure he considered all sorts of abstruse mathematics- yet a four-year old could have told him the answer.
FWIW I don’t think I worked it out when I first saw it.
The odds are against the gambler as always.He needs to be convinced he might just win.
I think the probability is 1/6^n n being number of dice.
But await mathematical proof.
Okay. Here’s what is called a “moral proof”. It explains why it’s true but isn’t algebraically rigorous.
Imagine I bet on every number.
I roll the dice and get three different numbers. Say 4,5,6
I lose a quid on 1,2 and 3
But I win a quid on 4,5 and 6
No effect. I lose three quid and win three quid. Evens out.
But say I’d rolled 5,6,6
I’d lose on the 1,2,3 and 4
I’d win on the 5 and win double on the 6
I’d lose £4 and win £3. Overall loss of £1.
Say I rolled 6,6,6
I’d lose on the 1,2,3,4,5
Win treble on the 6
I’d lose £5 and win £3. Overall loss £2
If you go through the algebra, the expected payout is about 94-95% (from memory). Better than a fruit machine or a bookie, but not a good bet.
Okay. Here’s what is called a “moral proof”. It explains why it’s true but isn’t algebraically rigorous.
Imagine I bet on every number.
I roll the dice and get three different numbers. Say 4,5,6
I lose a quid on 1,2 and 3
But I win a quid on 4,5 and 6
No effect. I lose three quid and win three quid. Evens out.
But say I’d rolled 5,6,6
I’d lose on the 1,2,3 and 4
I’d win on the 5 and win double on the 6
I’d lose £4 and win £3. Overall loss of £1.
Say I rolled 6,6,6
I’d lose on the 1,2,3,4,5
Win treble on the 6
I’d lose £5 and win £3. Overall loss £2
If you go through the algebra, the expected payout is about 94-95% (from memory). Better than a fruit machine or a bookie, but not a good bet.
That explanation has just lifted a small part of the brain freeze I've had since the first problem you posted earlier tonight
That explanation has just lifted a small part of the brain freeze I've had since the first problem you posted earlier tonight
Thank you, but DA started this trend to offer puzzles.
As a point of trivia. At school I played all sorts of card and dice game. My trousers were down my thighs with the change in my pockets.
Timid students hesitated to play poker with me, but dice? Where they can pick the number they’re betting on? They played that.
The game outlined here was a good share of my pocket money.
It’s a deceptive game. 5% edge doesn’t seem much. But over the long term (and dice is a fast game - lots of rolls per minute) it adds up. I just had to make sure I had enough money in my pockets to
be the bank until their luck changed.
I don’t think it’s an exaggeration to say that the majority of my maths education stemmed from a desire to fleece my classmates.
Try it. Give your missus/kids a load of pennies and play the bank. The bank doesn’t always win. It’s not a con. It feels fair (ha!). It’s more similar to the percentage of a roulette wheel. And maybe
your missus or children will feel like learning about probability theory afterwards.
Thank you, but DA started this trend to offer puzzles.
As a point of trivia. At school I played all sorts of card and dice game. My trousers were down my thighs with the change in my pockets.
Timid students hesitated to play poker with me, but dice? Where they can pick the number they’re betting on? They played that.
The game outlined here was a good share of my pocket money.
It’s a deceptive game. 5% edge doesn’t seem much. But over the long term (and dice is a fast game - lots of rolls per minute) it adds up. I just had to make sure I had enough money in my pockets
to be the bank until their luck changed.
I don’t think it’s an exaggeration to say that the majority of my maths education stemmed from a desire to fleece my classmates.
Try it. Give your missus/kids a load of pennies and play the bank. The bank doesn’t always win. It’s not a con. It feels fair (ha!). It’s more similar to the percentage of a roulette wheel. And
maybe your missus or children will feel like learning about probability theory afterwards.
Years ago a friend of mine studying maths at Bath University managed to subsidise the majority of his first years costs by participating in online casinos in Blackjack. More accurately, he would sign
up, be given £50-£100 of free bets on average, then on the basis of probability would play X amount of hands on the lowest blind amount until the stipulation of certain amount of games played ran
out, then would transfer his winnings/credit out of his account and close it down, to move onto the next online casino. He was regimental in sticking to a routine to twist on 16 and under or stick on
17 or over (I think). Either ways, he'd play in some cases a couple thousand games per casino with no thought or emotion, just sticking to his cut off points on probability, then cutting and running.
After about 4 months he'd amassed around £3000, just about enough to have covered his bar bill at the Students Union!
Ironically he ended up getting a Third, so, worse than if he hadn't bothered going to Uni, retrained and took dozens of exams and is now plying his trade as an Actuary.......
Deleted member 33931
Here’s the sequence again.
Let’s look at the first term.
What do we have?
But the digits repeat in later terms. What do we have in the first term?
One 1.
Does that help?
Yes, that helps.
I got it
Deleted member 33931
Thank you, but DA started this trend to offer puzzles.
As a point of trivia. At school I played all sorts of card and dice game. My trousers were down my thighs with the change in my pockets.
Timid students hesitated to play poker with me, but dice? Where they can pick the number they’re betting on? They played that.
The game outlined here was a good share of my pocket money.
It’s a deceptive game. 5% edge doesn’t seem much. But over the long term (and dice is a fast game - lots of rolls per minute) it adds up. I just had to make sure I had enough money in my pockets
to be the bank until their luck changed.
I don’t think it’s an exaggeration to say that the majority of my maths education stemmed from a desire to fleece my classmates.
Try it. Give your missus/kids a load of pennies and play the bank. The bank doesn’t always win. It’s not a con. It feels fair (ha!). It’s more similar to the percentage of a roulette wheel. And
maybe your missus or children will feel like learning about probability theory afterwards.
Is there only a 5% edge using that 3-dice trick?
Yes, that helps.
I got it
Yeah you got it now, try it 10pm on a Saturday night after a couple of lagers, and a couple hours of math sequences, with your head going round in circles at all the infinite possibilities!!!!! No
gold star for you matey! | {"url":"https://community.screwfix.com/threads/new-thread.197954/page-6","timestamp":"2024-11-14T00:09:07Z","content_type":"text/html","content_length":"182670","record_id":"<urn:uuid:d848c041-5505-4ab5-b3b8-2b6fc5e23034>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00535.warc.gz"} |
What is .375 As A Fraction: How To Find GCF? Factorization & Division Methods
In mathematics, fraction values and decimal values can be expressed in multiple distinct ways. However, nothing to worry how they are expressed, equivalent decimals and fractions always denote the
similar value. It is essential to learn how to correctly and rapidly express decimals such as .375 as a fraction has been a useful and practical mathematical skill that every student must have. In
this article, you will get information about .375 as a fraction.
Right Answer
.375 as a fraction equals to 3/8, however, you need to understand why.
Why is .375 Equivalent to 3/8?
First of all, it is important to determine the last digit’s place value which is present to the right of the decimal point. For example, 5 is the last digit that is in the 1000^th place value slot
(1/1000). With the help of a place value chart, an individual can easily figure out the place value of the final digit.
Place Value Chart
Thousands 1,000
Hundreds 100
Tens 10
Ones 1 0
Decimal .
Tenths 1/10 3
Hundreds 1/100 7
Thousands 1/1000 5
What is the Simplest Form of .375 as a Fraction?
A fraction is a selection or portion of any quantity out of entire, where the whole can be a thing, a specific value and any number. However, you can define a decimal number as a number through which
the whole fractional part and number part are distinguished by a decimal part. In this article, we will learn how to express .375 as a fraction in the simplest form.
How to Simplify .375 as a Fraction?
You need to follow some steps if you want to express .375 as a fraction in the simplest form.
First of all, you need to write a given number on the numerator and put 1 in the denominator right below the decimal point. Moreover, this decimal point is followed by the number of zeros needed
accordingly. In this way, there are 3 numbers of .375 after the decimal. You must remove the decimal point by placing 1000 in the denominator. Most importantly, this will make it 375/1000.
Now, we will learn how to express 375 over 1000 in the simplest form. Moreover, you can observe that 375/1000 is not in the simplified form. Therefore, the Highest Common Factor (HCF) of 375, 1000 =
125. However, you can simplify this fraction as follows. 375/1000=3/8. We have successfully reduced the fraction of 375/1000 to 3/8. In other meanings, 3/8 is the lowest term of .375 as a fraction.
Note: .375 as a fraction in simplified form can be expressed as 3/8.
How Can We Reduce A Fraction?
Reducing a fraction represents the denominator and numerator numbers of a fraction are smaller, if both numbers are divided by the similar common factor. Therefore, the fraction value will be
different, while the decimal value of the fraction will be the same. You need to find a common factor to determine whether it is possible to reduce a fraction to smaller numbers. Similarly, it is
essential to reduce once, if you can find the highest common factor for both the numbers. You need to do multiple reductions, if you find anything but the highest common factor.
Furthermore, if you choose the highest common factor approach or other approach then here is how to reduce our fraction. You need to follow some steps if you want to reduce a fraction.
1. Finding All Factors of Both Numbers
What are the Factors of 375 and 1,000?
375 = 1, 3, 5, 15, 25, 75, 125, 375
1000 = 1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 100, 125, 200, 250, 500, 1000
You can observe both sets from these two numbers. However, the highest common factor between both the sets is 125.
2. Using the Highest Common Factor
It is possible to reduce the real fraction by using the highest common factor. It requires division of both numbers by the highest common factor.
375 ÷125/ 1000 ÷125 = 3/8
We can reduce the original fractional form to its simplest form in one go by dividing each number in the fraction by 125.
3. Using the Smallest Common Factor
As the numbers get larger, sometimes it becomes difficult to figure out the highest common factor. In this way, we need to find out the least common factor. Therefore, by dividing both numbers in the
fraction by the smallest common factor, we are able to get our answer. Both the approaches are acceptable. Either these are related to the smallest common factor or the highest common factor.
If both numbers have a similar common factor of 5, then divide these numbers by it.
375 ÷ 5/ 1000 ÷ 5 = 75/200
Both the sets have still a similar common factor of 5. Again, you need to divide these two numbers by 5.
75 ÷ 5/ 200 ÷ 5 = 15/40
These two numbers still have a common factor, 5. Therefore, you will divide these numbers by 5 again as follows.
15 ÷ 5/ 40 ÷ 5 = 3/8
The reduction process has finished because there are no more common factors between 3 and 8.
What is the Greatest Common Factor (GCF)?
As the fraction moves ahead, 412/100 is clearly unwieldy. Moreover, we can perceive it substantially smaller. How to find the smallest or simplest form of fraction? Finding the simplest/smallest form
of fraction is what we refer to as reducing the fraction or putting a fraction in its simplest form. However, you need to find the greatest common divisor or greatest common factor (GCF) or (GCD).
The GCF is the greatest number which divides into the denominator and numerator of the fraction. The greatest common factor of 412/100 is 4, if you have the fraction 412/1000 and want to put it in
its simplest form. It would give us 103/250 by reducing this down to its simplest form. We will also take the decimal 0.875 as an example in this regard. If you want to convert it into a fraction,
you need to follow some steps.
• · Count the columns.
• · Move the decimal place over 3 spaces.
• · Put one thousand underneath it.
Most importantly, it provides us 875/1000. Moreover, 125 is the greatest common factor of 875/1000. Consequently, you will get 7/8, if you divide 125 into the numerator and denominator.
How to Find the Greatest Common Factor (GCF)?
You will need to do some calculations if you want to find the GCF of any fraction. There are different methods through which you can find the greatest common factor.
1) Prime Factorization Method
In this way, the Prime Factorization method is the most common in all of them. It requires multiplying out the prime factors present in both numbers. For example, you have a fraction like 18/24.
Therefore, 2 and 3 are the prime factors of 18 (2 × 3 × 3 = 18).
Similarly, 2 and 3 are the prime factors of 24 (2 × 2 × 2 × 3 = 24). You will get 6, if you multiply both these numbers and divide into 18/24 to get ¾. It is easy to enlist the common factors
between two numbers.
2) Division Method
It has been an alternate method to find the greatest common factor (GCF). By this method, it is possible to divide the denominator and numerator of the fraction into smaller and smaller chunk unless
they cannot be divided anymore. The numbers still have common factors which can be divided easily. However, you need to divide them unless these are not being divided anymore.
.375 as a fraction is an example in mathematics. In addition to this, there are different methods through which you can do factorization without facing any difficulty. However, the prime
factorization and division methods are the most common examples of it. People want understanding to find out the greatest common factor in this regard. Keep one thing in mind, there are two common
factors such as GCF and LCF through which you can ensure factorization. You can also figure out .375 as a fraction by using these methods. For more relevant topics keep in touch with https://
An individual needs to follow some steps to convert fractional form into decimal form. However, we have discussed this conversion above with detail. If you get two numbers in the fraction, remove the
fractional point by replacing it with a number of zeros. Now, find out the least and greatest common factor of the fraction to get a right answer. | {"url":"https://thelifonews.com/what-is-375-as-a-fraction-how-to-find-gcf-factorization-division-methods/","timestamp":"2024-11-03T22:11:05Z","content_type":"text/html","content_length":"227898","record_id":"<urn:uuid:0fbc5003-e8d3-43e2-834b-16fe99edad1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00359.warc.gz"} |
How does Gibbs free energy change relate to work? | Socratic
How does Gibbs free energy change relate to work?
1 Answer
The $\Delta G$ for a reversible process is equal to the maximum non-PV work that can be performed at constant temperature and pressure on a conservative system.
Consider the differential relationship between the Gibbs' free energy, enthalpy, and entropy:
$\mathrm{dG} = \mathrm{dH} - d \left(T S\right)$
From the definition of enthalpy, $H = U + P V$, where $U$ is the internal energy. As a result,
$\mathrm{dG} = \mathrm{dU} + d \left(P V\right) - d \left(T S\right)$
From the first law of thermodynamics, $\mathrm{dU} = \delta q + \delta w$, where $\delta$ indicates a path function.
$\mathrm{dG} = \delta q + \delta w + P \mathrm{dV} + V \mathrm{dP} - T \mathrm{dS} - S \mathrm{dT}$
Work can be defined as
$\delta w = \delta {w}_{\text{PV" + deltaw_"non-PV}}$,
where $\text{PV}$ work defined from the perspective of the system is $\delta {w}_{\text{PV}} = - P \mathrm{dV}$. Non-PV work can be, e.g. electrical work (think electrochemistry).
From this, assuming that the process performed is reversible (in thermal equilibrium the whole way through), ${q}_{r e v} = T \mathrm{dS}$, so:
#color(green)(dG) = overbrace(cancel(TdS))^(q_(rev)) + deltaw_"non-PV" overbrace(- cancel(PdV))^(w_(rev,"PV")) + cancel(PdV) + VdP - cancel(TdS) - SdT#
$= \textcolor{g r e e n}{- S \mathrm{dT} + V \mathrm{dP} + \delta {w}_{\text{non-PV}}}$
In the end, we find that at constant temperature and pressure, the Gibbs' free energy corresponds to the maximum non-compression and non-expansion work that can be performed:
$\textcolor{b l u e}{\mathrm{dG} = \delta {w}_{\text{non-PV", " const T & P}}}$
Impact of this question
12423 views around the world | {"url":"https://socratic.org/questions/how-does-gibbs-free-energy-change-relate-to-work#614344","timestamp":"2024-11-12T18:17:51Z","content_type":"text/html","content_length":"36312","record_id":"<urn:uuid:969bdf61-31b0-4cec-a2a2-2bf6b65c6189>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00237.warc.gz"} |
i-vector S
i-vector Score Calibration
An i-vector system outputs a raw score specific to the data and parameters used to develop the system. This makes interpreting the score and finding a consistent decision threshold for verification
tasks difficult.
To address these difficulties, researchers developed score normalization and score calibration techniques.
• In score normalization, raw scores are normalized in relation to an 'imposter cohort'. Score normalization occurs before evaluating the detection error tradeoff and can improve the accuracy of a
system and its ability to adapt to new data.
• In score calibration, raw scores are mapped to probabilities, which in turn are used to better understand the system's confidence in decisions.
In this example, you apply score calibration to an i-vector system. To learn about score normalization, see i-vector Score Normalization.
For example purposes, you use cosine similarity scoring (CSS) throughout this example. The interpretability of probabilistic linear discriminant analysis (PLDA) scoring is also improved by
Starting in R2022a, you can use the calibrate method of ivectorSystem to calibrate both CSS and PLDA scoring.
Download i-vector System and Data Set
To download a pretrained i-vector system suitable for speaker recognition, call speakerRecognition. The ivectorSystem returned was trained on the LibriSpeech data set, which consists of
English-language 16 kHz recordings.
ivs = speakerRecognition;
Download the PTDB-TUG data set [1]. The supporting function, loadDataset, downloads the data set and then resamples it from 48 kHz to 16 kHz, which is the sample rate that the i-vector system was
trained on. The loadDataset function returns these audioDatastore objects:
• adsEnroll - Contains files to enroll speakers into the i-vector system.
• adsDev - Contains a large set of files to analyze the detection error tradeoff of the i-vector system, and to spot-check performance.
• adsCalibrate - Contains a set of speakers used to calibrate the i-vector system. The calibration set does not overlap with the enroll and dev sets.
targetSampleRate = ivs.SampleRate;
[adsEnroll,adsDev,adsCalibrate] = loadDataset(targetSampleRate);
Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 6).
Score Calibration
In score calibration, you apply a warping function to scores so that they are more easily and consistently interpretable as measures of confidence. Generally, score calibration has no effect on the
performance of a verification system because the mapping is an affine transformation. The two most popular approaches to calibration are Platt scaling and isotonic regression. Isotonic regression
usually results in better performance, but is more prone to overfitting if the calibration data is too small [2].
In this example, you perform calibration using both Platt scaling and isotonic regression, and then compare the calibrations using reliability diagrams.
Extract i-vectors
To properly calibrate a system, you must use data that does not overlap with the evaluation data. Extract i-vectors from the calibration set. You will use these i-vectors to create a calibration
warping function.
calibrationIvecs = ivector(ivs,adsCalibrate);
Score i-vector Pairs
You will score each i-vector against each other i-vector to create a matrix of scores, some of which correspond to target scores where both i-vectors belong to the same speaker, and some of which
correspond to non-target scores where the i-vectors belong to two different speakers. First, create a targets matrix to keep track of which scores are target and which are non-target.
targets = true(size(calibrationIvecs,2),size(calibrationIvecs,2));
calibrationLabels = adsCalibrate.Labels;
for ii = 1:size(calibrationIvecs,2)
targets(:,ii) = ismember(calibrationLabels,calibrationLabels(ii));
Discard the target scores that corresponds to the i-vector scored with itself by setting the corresponding value in the target matrix to NaN. The supporting function, scoreTargets, scores each valid
i-vector pair and returns the results in cell arrays of target and non-target scores.
targets = targets + diag(diag(targets)*nan);
[targetScores,nontargetScores] = scoreTargets(calibrationIvecs,calibrationIvecs,targets);
Use the supporting function, plotScoreDistrubtions, to plot the target and non-target score distributions for the group. The scores range from around 0.64 to 1. In a properly calibrated system,
scores should range from 0 to 1. The job of calibrating a binary classification system is to map the raw score to a score between 0 and 1. The calibrated score should be interpretable as the
probability that the score corresponds to a target pair.
Platt Scaling
Platt scaling (also referred to as Platt calibration or logistic regression) works by fitting a logistic regression model to a classifier's scores.
The supporting function logistic implements a general logistic function defined as
where $\mathit{A}$ and $\mathit{B}$ are the scalar learned parameters.
The supporting function logRegCost defines the cost function for logistic regression as defined in [3]:
$\stackrel{\mathrm{argmin}}{\mathit{A},\mathit{B}}\left\{-\sum _{\mathit{i}}^{}{\mathit{y}}_{\mathit{i}}\mathrm{log}\left({\mathit{p}}_{\mathit{i}}\right)+\left(1-{\mathit{y}}_{\mathit{i}}\right)\
As described in [3], the target values are modified from 0 and 1 to avoid overfitting:
where ${\mathit{y}}_{+}$ is the positive sample value and ${\mathit{N}}_{+}$ is the number of positive samples, and ${\mathit{y}}_{-}$ is the negative sample value and ${\mathit{N}}_{-}$ is the
number of negative samples.
Create a vector of the raw target and non-target scores.
tS = cat(1,targetScores{:});
ntS = cat(1,nontargetScores{:});
x = [tS;ntS];
Create a vector of ideal target probabilities.
yplus = (numel(tS) + 1)./(numel(tS) + 2);
yminus = 1./(numel(ntS) + 2);
y = [yplus*ones(numel(tS),1);yminus*ones(numel(ntS),1)];
Use fminsearch to find the values of A and B that minimize the cost function.
init = [1,1];
AB = fminsearch(@(AB)logRegCost(y,x,AB),init);
Sort the scores in ascending order for visualization purposes.
[x,idx] = sort(x,"ascend");
trueLabel = [ones(numel(tS),1);zeros(numel(ntS),1)];
trueLabel = trueLabel(idx);
Use the supporting function calibrateScores to calibrate the raw scores. Plot the warping function that maps the raw scores to the calibrated scores. Also plot the target scores you are modeling.
calibratedScores = calibrateScores(x,AB);
hold on
grid on
xlabel("Raw Score")
ylabel("Calibrated Score")
hold off
Isotonic Regression
Isotonic regression fits a free-form line to observations with the only condition being that it is non-decreasing (or non-increasing). The supporting function isotonicRegression uses the pool
adjacent violators (PAV) algorithm [3] for isotonic regression.
Call isotonicRegression with the raw score and true labels. The function outputs a struct containing a map between raw scores and calibrated scores.
scoringMap = isotonicRegression(x,trueLabel);
Plot the raw score against the calibrated score. The line is the learned isotonic fit. The circles are the data you are fitting.
hold on
grid on
xlabel("Raw Score")
ylabel("Calibrated Score")
hold off
Reliability Diagram
Reliability diagrams reveal reliability by plotting the mean of the predicted value against the known fraction of positives. A system is reliable if the mean of the predicted value is equal to the
fraction of positives [4].
Reliability must be assessed using a different data set than the one used to calibrate the system. Extract i-vectors from the development data set, adsDev. The development data set has no speaker
overlap with the calibration data set.
devIvecs = ivector(ivs,adsDev);
Create a targets map and score all i-vector pairs.
devLabels = adsDev.Labels;
targets = true(size(devIvecs,2),size(devIvecs,2));
for ii = 1:size(devIvecs,2)
targets(:,ii) = ismember(devLabels,devLabels(ii));
targets = targets + diag(diag(targets)*nan);
[targetScores,nontargetScores] = scoreTargets(devIvecs,devIvecs,targets);
Combine all the scores and labels for faster processing.
ts = cat(1,targetScores{:});
nts = cat(1,nontargetScores{:});
scores = [ts;nts];
trueLabels = [true(numel(ts),1);false(numel(nts),1)];
Calibrate the scores using Platt scaling.
calibratedScoresPlattScaling = calibrateScores(scores,AB);
Calibrate the scores using isotonic regression.
calibratedScoresIsotonicRegression = calibrateScores(scores,scoringMap);
When interpreting the reliability diagram, values below the diagonal indicate that the system is giving higher probability scores than it should be, and values above the diagonal indicate the system
is giving lower probability scores than it should. In both cases, increasing the amount of calibration data, and using calibration data like the target application, should improve performance.
Plot the reliability diagram for the i-vector system calibrated using Platt scaling.
Plot the reliability diagram for the i-vector system calibrated using isotonic regression.
Supporting Functions
Load Dataset
function [adsEnroll,adsDev,adsCalibrate] = loadDataset(targetSampleRate)
%LOADDATASET Load PTDB-TUG data set
% [adsEnroll,adsDev,adsCalibrate] = loadDataset(targetSampleteRate)
% downloads the PTDB-TUG data set, resamples it to the specified target
% sample rate and save the results in your current folder. The function
% then creates and returns three audioDatastore objects. The enrollment set
% includes two utterances per speaker. The calibrate set does not overlap
% with the other data sets.
% Copyright 2021-2022 The MathWorks, Inc.
downloadFolder = matlab.internal.examples.downloadSupportFile("audio","ptdb-tug.zip");
dataFolder = tempdir;
dataset = fullfile(dataFolder,"ptdb-tug");
% Resample the dataset and save to current folder if it doesn't already
% exist.
if ~isfolder(fullfile(pwd,"MIC"))
ads = audioDatastore([fullfile(dataset,"SPEECH DATA","FEMALE","MIC"),fullfile(dataset,"SPEECH DATA","MALE","MIC")], ...
IncludeSubfolders=true, ...
FileExtensions=".wav", ...
reduceDataset = false;
if reduceDataset
ads = splitEachLabel(ads,10);
adsTransform = transform(ads,@(x,y)fileResampler(x,y,targetSampleRate),IncludeInfo=true);
% Create a datastore that points to the resampled dataset. Use the folder
% names as the labels.
ads = audioDatastore(fullfile(pwd,"MIC"),IncludeSubfolders=true,LabelSource="foldernames");
% Split the data set into enrollment, development, and calibration sets.
calibrationLabels = categorical(["M01","M03","M05","M7","M9","F01","F03","F05","F07","F09"]);
adsCalibrate = subset(ads,ismember(ads.Labels,calibrationLabels));
adsDev = subset(ads,~ismember(ads.Labels,calibrationLabels));
numToEnroll = 2;
[adsEnroll,adsDev] = splitEachLabel(adsDev,numToEnroll);
File Resampler
function [audioOut,adsInfo] = fileResampler(audioIn,adsInfo,targetSampleRate)
%FILERESAMPLER Resample audio files
% [audioOut,adsInfo] = fileResampler(audioIn,adsInfo,targetSampleRate)
% resamples the input audio to the target sample rate and updates the info
% passed through the datastore.
% Copyright 2021 The MathWorks, Inc.
audioIn (:,1) {mustBeA(audioIn,["single","double"])}
adsInfo (1,1) {mustBeA(adsInfo,"struct")}
targetSampleRate (1,1) {mustBeNumeric,mustBePositive}
% Isolate the original sample rate
originalSampleRate = adsInfo.SampleRate;
% Resample if necessary
if originalSampleRate ~= targetSampleRate
audioOut = resample(audioIn,targetSampleRate,originalSampleRate);
amax = max(abs(audioOut));
if max(amax>1)
audioOut = audioOut./amax;
% Update the info passed through the datastore
adsInfo.SampleRate = targetSampleRate;
Score Targets and Non-Targets
function [targetScores,nontargetScores] = scoreTargets(e,t,targetMap,nvargs)
%SCORETARGETS Score i-vector pairs
% [targetScores,nontargetScores] = scoreTargets(e,t,targetMap) exhaustively
% scores i-vectors in e against i-vectors in t. Specify e as an M-by-N
% matrix, where M corresponds to the i-vector dimension, and N corresponds
% to the number of i-vectors in e. Specify t as an M-by-P matrix, where P
% corresponds to the number of i-vectors in t. Specify targetMap as a
% P-by-N numeric matrix that maps which i-vectors in e and t are target
% pairs (derived from the same speaker) and which i-vectors in e and t are
% non-target pairs (derived from different speakers). Values in targetMap
% set to NaN are discarded. The outputs, targetScores and nontargetScores,
% are N-element cell arrays. Each cell contains a vector of scores between
% the i-vector in e and either all the targets or nontargets in t.
% [targetScores,nontargetScores] =
% scoreTargets(e,t,targetMap,NormFactorsSe=NFSe,NormFactorsSt=NFSt)
% normalizes the scores by the specified normalization statistics contained
% in structs NFSe and NFSt. If unspecified, no normalization is applied.
% Copyright 2021 The MathWorks, Inc.
e (:,:) {mustBeA(e,["single","double"])}
t (:,:) {mustBeA(t,["single","double"])}
targetMap (:,:)
nvargs.NormFactorsSe = [];
nvargs.NormFactorsSt = [];
% Score the i-vector pairs
scores = cosineSimilarityScore(e,t);
% Apply as-norm1 if normalization factors supplied.
if ~isempty(nvargs.NormFactorsSe) && ~isempty(nvargs.NormFactorsSt)
scores = 0.5*( (scores - nvargs.NormFactorsSe.mu)./nvargs.NormFactorsSe.std + (scores - nvargs.NormFactorsSt.mu')./nvargs.NormFactorsSt.std' );
% Separate the scores into targets and non-targets
targetScores = cell(size(targetMap,2),1);
nontargetScores = cell(size(targetMap,2),1);
removeIndex = isnan(targetMap);
for ii = 1:size(targetMap,2)
localScores = scores(:,ii);
localMap = targetMap(:,ii);
localScores(removeIndex(:,ii)) = [];
localMap(removeIndex(:,ii)) = [];
targetScores{ii} = localScores(logical(localMap));
nontargetScores{ii} = localScores(~localMap);
Cosine Similarity Score (CSS)
function scores = cosineSimilarityScore(a,b)
%COSINESIMILARITYSCORE Cosine similarity score
% scores = cosineSimilarityScore(a,b) scores matrix of i-vectors, a,
% against matrix of i-vectors b. Specify a as an M-by-N matrix of
% i-vectors. Specify b as an M-by-P matrix of i-vectors. scores is returned
% as a P-by-N matrix, where columns corresponds the i-vectors in a
% and rows corresponds to the i-vectors in b and the elements of the array
% are the cosine similarity scores between them.
% Copyright 2021 The MathWorks, Inc.
a (:,:) {mustBeA(a,["single","double"])}
b (:,:) {mustBeA(b,["single","double"])}
scores = squeeze(sum(a.*reshape(b,size(b,1),1,[]),1)./(vecnorm(a).*reshape(vecnorm(b),1,1,[])));
scores = scores';
Plot Score Distributions
function plotScoreDistributions(targetScores,nontargetScores,nvargs)
%PLOTSCOREDISTRIBUTIONS Plot target and non-target score distributions
% plotScoreDistribution(targetScores,nontargetScores) plots empirical
% estimations of the distribution for target scores and nontarget scores.
% Specify targetScores and nontargetScores as cell arrays where each
% element contains a vector of speaker-specific scores.
% plotScoreDistrubtions(targetScores,nontargetScores,Analyze=ANALYZE)
% specifies the scope for analysis as either "label" or "group". If ANALYZE
% is set to "label", then a score distribution plot is created for each
% label. If ANALYZE is set to "group", then a score distribution plot is
% created for the entire group by combining scores across speakers. If
% unspecified, ANALYZE defaults to "group".
% Copyright 2021 The MathWorks, Inc.
targetScores (1,:) cell
nontargetScores (1,:) cell
nvargs.Analyze (1,:) char {mustBeMember(nvargs.Analyze,["label","group"])} = "group"
% Combine all scores to determine good bins for analyzing both the target
% and non-target scores together.
allScores = cat(1,targetScores{:},nontargetScores{:});
[~,edges] = histcounts(allScores);
% Determine the center of each bin for plotting purposes.
centers = movmedian(edges(:),2,Endpoints="discard");
if strcmpi(nvargs.Analyze,"group")
% Plot the score distributions for the group.
targetScoresBinCounts = histcounts(cat(1,targetScores{:}),edges);
targetScoresBinProb = targetScoresBinCounts(:)./sum(targetScoresBinCounts);
nontargetScoresBinCounts = histcounts(cat(1,nontargetScores{:}),edges);
nontargetScoresBinProb = nontargetScoresBinCounts(:)./sum(nontargetScoresBinCounts);
title("Score Distributions")
axis tight
% Create a tiled layout and plot the score distributions for each speaker.
N = numel(targetScores);
for ii = 1:N
targetScoresBinCounts = histcounts(targetScores{ii},edges);
targetScoresBinProb = targetScoresBinCounts(:)./sum(targetScoresBinCounts);
nontargetScoresBinCounts = histcounts(nontargetScores{ii},edges);
nontargetScoresBinProb = nontargetScoresBinCounts(:)./sum(nontargetScoresBinCounts);
hold on
title("Score Distribution for Speaker " + string(ii))
axis tight
Calibrate Scores
function y = calibrateScores(score,scoreMapping)
%CALIBRATESCORES Calibrate scores
% y = calibrateScores(score,scoreMapping) maps the raw scores to calibrated
% scores, y, using the score mappinging information in scoreMapping.
% Specify score as a vector or matrix of raw scores. Specify score mapping
% as either struct or a two-element vector. If scoreMapping is specified as
% a struct, then it should have two fields: Raw and Calibrated, that
% together form a score mapping. If scoreMapping is specified as a vector,
% then the elements are used as the coefficients in the logistic function.
% y is returned as vector or matrix the same size as the raw scores.
% Copyright 2021 The MathWorks, Inc.
score (:,:) {mustBeA(score,["single","double"])}
if isstruct(scoreMapping)
% Calibration using isotonic regression
rawScore = scoreMapping.Raw;
interpretedScore = scoreMapping.Calibrated;
n = numel(score);
% Find the index of the raw score in the mapping closest to the score provided.
idx = zeros(n,1);
for ii = 1:n
[~,idx(ii)] = min(abs(score(ii)-rawScore));
% Get the calibrated score.
y = interpretedScore(idx);
% Calibration using logistic regression
y = logistic(score,scoreMapping);
Reliability Diagram
function reliabilityDiagram(targets,predictions,numBins)
%RELIABILITYDIAGRAM Plot reliability diagram
% reliabilityDiagram(targets,predictions) plots a reliability diagram for
% targets and predictions. Specify targets an M-by-1 logical vector.
% Specify predictions as an M-by-1 numeric vector.
% reliabilityDiagram(targets,predictions,numBins) specifies the number of
% bins for the reliability diagram. If unspecified, numBins defaults to 10.
% Copyright 2021 The MathWorks, Inc.
targets (:,1) {mustBeA(targets,"logical")}
predictions (:,1) {mustBeA(predictions,["single","double"])}
numBins (1,1) {mustBePositive,mustBeInteger} = 10;
% Bin the predictions into the requested number of bins. Count the number of
% predictions per bin.
[predictionsPerBin,~,predictionsInBin] = histcounts(predictions,numBins);
% Determine the mean of the predictions in the bin.
meanPredictions = accumarray(predictionsInBin,predictions)./predictionsPerBin(:);
% Determine the mean of the targets per bin. This is the fraction of
% positives--the number of targets in the bin over the total number of
% predictions in the bin.
meanTargets = accumarray(predictionsInBin,targets)./predictionsPerBin(:);
hold on
legend("Ideal Calibration",Location="best")
xlabel("Mean Predicted Value")
ylabel("Fraction of Positives")
title("Reliability Diagram")
grid on
hold off
Logistic Regression Cost Function
function cost = logRegCost(y,f,iparams)
%LOGREGCOST Logistic regression cost
% cost = logRegCost(y,f,iparams) calculates the cost of the logistic
% function given truth y, prediction f, and logistic params iparams.
% Specify y and f as column vectors. Specify iparams as a two-element row
% vector in the form [A,B], where A and B are the model parameters:
% 1
% p(x) = ------------------
% 1 + e^(-A*f - B)
% Copyright 2021 The MathWorks, Inc.
y (:,1) {mustBeA(y,["single","double"])}
f (:,1) {mustBeA(f,["single","double"])}
iparams (1,2) {mustBeA(iparams,["single","double"])}
p = logistic(f,iparams);
cost = -sum(y.*log(p) + (1-y).*log(1-p));
Logistic Function
function p = logistic(f,iparams)
%LOGISTIC Logistic function
% p = logistic(f,iparams) applies the general logistic function to input f
% with parameters iparams. Specify f as a numeric array. Specify iparams as
% a two-element vector. p is returned as the same size as f.
% Copyright 2021 The MathWorks, Inc.
iparams = [1 0];
p = 1./(1+exp(-iparams(1).*f - iparams(2)));
Isotonic Regression
function scoreMapping = isotonicRegression(x,y)
%ISOTONICREGRESSION Isotonic regression
% scoreMapping = isotonicRegression(x,y) fits a line yhat to data y under
% the monotonicity constraint that x(i)>x(j) -> yhat(i)>=yhat(j). That is,
% the values in yhat are monotontically non-decreasing with respect to x.
% The output, scoreMapping, is a struct containing the changepoints of yhat
% and the corresponding raw score in x.
% Copyright 2021, The MathWorks, Inc.
N = numel(x);
% Sort points in ascending order of x.
[x,idx] = sort(x(:),"ascend");
y = y(idx);
% Initialize fitted values to the given values.
m = y;
% Initialize blocks, one per point. These will merge and the number of
% blocks will reduce as the algorithm proceeds.
blockMap = 1:N;
w = ones(size(m));
while true
diffs = diff(m);
if all(diffs >= 0)
% If all blocks are monotonic, end the loop.
% Find all positive changepoints. These are the beginnings of each
% block.
blockStartIndex = diffs>0;
% Create group indices for each unique block.
blockIndices = cumsum([1;blockStartIndex]);
% Calculate the mean of each block and update the weights for the
% blocks. We're merging all the points in the blocks here.
m = accumarray(blockIndices,w.*m);
w = accumarray(blockIndices,w);
m = m ./ w;
% Map which block corresponds to which index.
blockMap = blockIndices(blockMap);
% Broadcast merged blocks out to original points.
m = m(blockMap);
% Find the changepoints
changepoints = find(diff(m)>0);
changepoints = [changepoints;changepoints+1];
changepoints = sort(changepoints);
% Remove all points that aren't changepoints.
a = m(changepoints);
b = x(changepoints);
scoreMapping = struct(Raw=b,Calibrated=a);
[1] G. Pirker, M. Wohlmayr, S. Petrik, and F. Pernkopf, "A Pitch Tracking Corpus with Evaluation on Multipitch Tracking Scenario", Interspeech, pp. 1509-1512, 2011.
[2] van Leeuwen, David A., and Niko Brummer. "An Introduction to Application-Independent Evaluation of Speaker Recognition Systems." Lecture Notes in Computer Science, 2007, 330–53.
[3] Niculescu-Mizil, A., & Caruana, R. (2005). Predicting good probabilities with supervised learning. Proceedings of the 22nd International Conference on Machine Learning - ICML '05. doi:10.1145/
[4] Brocker, Jochen, and Leonard A. Smith. “Increasing the Reliability of Reliability Diagrams.” Weather and Forecasting 22, no. 3 (2007): 651–61. https://doi.org/10.1175/waf993.1. | {"url":"https://de.mathworks.com/help/audio/ug/i-vector-score-calibration.html","timestamp":"2024-11-06T07:16:59Z","content_type":"text/html","content_length":"118485","record_id":"<urn:uuid:e9f1fbb7-bab2-4814-b540-c1078c655808>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00187.warc.gz"} |
June 2016 LSAT Question 7 Explanation
Mollie must be assigned to 1922 if which one of the following is true?
I really struggled with this game. How does Yoshio being assigned to 1921 ensure that Mollie is in 1922? The two don't seem to be related. thanks!
Great question. Let's look at the setup first.
We have a 4-slot game board with 2 pieces that are out
_ _ _ _ | _ _
Our pieces are L, M, O, R, T, and Y.
Rule 1 says L or T has to be in 1923
_ _ L/T _ | _ _
Rule 2 says if M is in, she's in 1921 or 1922
M in - >21 or 22
Rule 3 says if T is in, then R is in.
T - >R
Rule 4 says if R is in, then O has to be in in the year before R
R - >OR
We can also combine rules 3 and 4 together
T - >R - >O and OR
This is an important inference because if we try to put O out, that
means both R and T would also have to go out. However, we only have 2
spots in the out group. so this means that O must go in.
If R goes out, we know that T goes out, so this is a good place to
split up the game board.
_ _ L/T _ | _ _
In this game board, we're putting R in, so OR have to go together. The
only spot for them to do so is in 1 and 2.
O R L/T _ | _ _
This also means that M is out since 1921 and 1922 are occupied by O and R.
O R L/T _ | M _
Now let's look at our other board.
_ _ L/T _ | _ R
If R is out, then T is out.
_ _ L _ | T R
Now let's look at the question. Mollie must be assigned to 1922 if
which one of the following is true?
We know M is in the game, so we're only concerned with our second board.
_ _ L _ | T R
(A) Louis is assigned to 1924.
The only scenario in which this occurs is the first board, and M is
out in that board, so this answer is automatically out.
(B) Onyx is assigned to 1921.
(B) doesn't work because in our first game board, we have O in 1921 and M out.
O R L/T _ | M _
(C) Onyx is assigned to 1924.
If O is in 1924, M could still be in 1921 or 1922 in our second game
board, so (C) is out.
(D) Tiffany is assigned to 1923.
The first game board is the only board where T could be in 1923, and
in that board, M is out, so (D) is out.
(E) Yoshio is assigned to 1921.
Our first board has O in 1921, so we're looking at the second game board.
_ _ L _ | T R
If Y is in 1921, then M has to go in 1922 because T and R have already
filled up the out group and we know that if M is in, it has to go in
1921 or 1922. Since 1921 is full, it has to be in 1922. Thus, (E) is
the correct answer.
Does this make sense? Let us know if you have any more questions!
Maybe I am just missing something obvious but why is (b) also not possible? It would seem any variable that doesn't have an extra rule against it taking 1921 would force M into 1922? Why does it have
to be Y vs O? Am I missing a rule that say O can't go in spot 1 or 4 or hence an ordering of:
OMLY or YMLO. Both seem valid ?
Hi @ElliottF,
Thanks for the question! So the question here is going to be about what, if true, will force Mollie to be in 1922. Remember, the second rule only tells us that IF Mollie is in the project, she has to
be in 1921 or 1922. So someone could be in 1921, and Mollie’s just not in the project. And in that case, she wouldn’t have to be assigned to 1922.
So let’s look at (B). It tells us that O is assigned to 1921. So let’s put O there, and just not put M in the game! Then we could have something like
And that doesn’t violate any rules, and also doesn’t assign M to 1922. So O can be in 1921 and M not in 1922, so (B) is wrong. The answer choice has to both force M to be in the game and force M to
be in 1922 instead of 1921, and that’s what (E) does.
Hope this helps! Feel free to ask any other questions that you might have. | {"url":"https://testmaxprep.com/lsat/community/100002561-question","timestamp":"2024-11-06T21:34:16Z","content_type":"text/html","content_length":"70206","record_id":"<urn:uuid:ae76c954-1032-40bd-b448-8421522fcfc5>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00494.warc.gz"} |
Problem Centered Learning - Connected Mathematics Project
Problem Centered Learning
Learning in a Problem Centered Curriculum
Over the past three to four decades, a growing body of knowledge from the cognitive sciences has supported the notion that students develop their own understanding from their experiences with
mathematics. The National Research Council, among other groups, has drawn attention to research that suggests that "learning is a complex cognitive process that builds on prior knowledge and requires
active engagement with new situations." (See How People Learn.) "The process of inquiry, not merely giving instruction, is the very heart of what teachers do." (3)
Rationale for a Problem Centered Curriculum
• CMP is problem-centered. This means that important mathematical ideas are embedded in engaging problems. Students develop understanding and skill as they explore a coherent set of problems,
individually, in a group, or with the class. "Effective instruction models good thinking, provides hints, and prompts students who can not get it on their own." (2) Inquiry, reflection,
meaningful problems in a variety of contexts, and sense making, are all elements of the CMP program.
• Students' perceptions about a discipline come from the tasks or problems in which they are asked to engage. For example, if students in a geometry course are asked to memorize definitions, they
think geometry is about memorizing definitions. If students spend a majority of their mathematics time practicing paper-and-pencil computations, they come to believe that mathematics is about
calculating answers to arithmetic problems as quickly as possible. They may become adept at quickly performing specific types of computations, but they may not be able to apply these skills to
other situations or to recognize problems that call for these skills. If the purpose of studying mathematics is to be able to solve a variety of problems, then students need to spend significant
time solving problems that require thinking, planning, reasoning, computing and evaluating.
• CMP places important mathematics in problems in context. Research evidence from the cognitive sciences supports the theory that students can make sense of mathematics if the concepts are embedded
within a context or problem. If time is spent exploring interesting mathematical situations, reflecting on solution methods, comparing methods, and examining why methods work, then students are
likely to build more robust understanding of mathematical concepts and procedures.
• A problem-centered curriculum not only helps students make sense of the mathematics, it appears to also help them process the mathematics in a retrievable way.
• Teachers of CMP report that students in succeeding grades remember and refer to a concept, technique, or strategy, by the name of the problem in which they encountered the idea.
• Results from the cognitive sciences also suggest that learning is enhanced if it is connected to prior knowledge, and is more likely to be retained and applied appropriately to future learning.
• CMP Units build on each other. Concepts developed in one unit are deliberately connected to prior investigations and skills; and problems in future units further develop or refine strategies.
The Parent/ Guardian Role
As parents or guardians talk to their children about what they have learned in class they become an active part of the learning process. They are some of the knowledgeable experts in their children's
environment. Their expertise may be in the mathematical ideas, or in the learning process itself. They can provide the help their children need with the homework, without taking away the gains to be
made from a student's individual work. They can encourage their students to reflect on what was recently learned. When they ask questions and allow their children to explain concepts they are part of
the metacognitive process (reflecting on one's understanding and thinking) that researchers tell us enhances achievement and develops the ability to learn independently.
In CMP important mathematical ideas are identified. Each idea is studied in depth within a unit and then used throughout the remaining units. These mathematical ideas are embedded in the context of
interesting problems. As students explore a series of connected problems, they develop understanding of the embedded ideas and with the aid of the teacher, abstract powerful mathematical ideas, and
problem-solving strategies. CMP students are developing mathematical habits of mind: solving problems, reflecting on solution methods, examining why the methods work, comparing methods, generalizing
methods, and relating methods to those used in previous situations. Every problem in Connected Mathematics satisfies all of the following criteria:
• It contains important, useful mathematics.
• It requires higher-level thinking and problem solving.
• It contributes to students' conceptual development.
• It connects to other important mathematical ideas.
• It promotes the skillful use of mathematics.
• Students can approach it in multiple ways, using different solution strategies.
• It provides an opportunity to practice important skills.
• It engages students and encourages discourse.
• It has various solution methods or allows different decisions or positions to be taken and defended.
• It creates an opportunity for the teacher to assess what students are learning and where they may be experiencing difficulty.
National Research Council. How People Learn: Brain, Mind, Experience, and School. Committee on Developments in the Science of Learning and the Committee on Learning Research and Educational Practice.
J Bransford, A. Brown, R. Cocking, S. Donovan, and J. Pellegrino (eds.).Washington, DC: National Academy Press 2000.
National Research Council. How People Learn: Bridging Research and Practice. J Bransford, A. Brown, R. Cocking (eds.).Washington, DC: National Academy Press 2000.,
U.S. Department of Education. Before It's Too Late: A Report to the Nation from the National Commission on Mathematics and Science Teaching for the 21st Century. Washington, DC.
Garafolo, Joe and Frank K Lester, Jr. "Metacognition, Cognitive Monitoring, and Mathematical Performance." Journal for Research in Mathematics Education 16 (May 1985): 163-76.
Hiebert, James. "Relationships between Research and the NCTM Standards." Journal for Research in Mathematics Education 30 (January 1999): 3 - 19.
Silver, Edward A., Jeremy Kilpatrick, and Beth G. Schlesinger. Thinking Through Mathematics: Fostering Enquiry and Communication in Mathematics Classrooms. New York: College Entrance Examination
Board, 1990.
Silver, Edward A., and Margaret S. Smith. "Implementing Reform in the Mathematics Classroom: Creating Mathematical Discourse Communities." In Reform in Math and Science Education: Issues for
Teachers. Columbus, Ohio: Eisenhower National Clearing House for Mathematics and Science Education, 1997. CD-ROM.
Stigler, James W., and James Heibert. The Teaching Gap: Best Ideas from the World's Teachers for Improving Education in the Classroom. New York: The Free Press, 1999.
Kilpatrick, Jeremy, and Martin, Gary W., and Schifter, Deborah. Ed. A Research Companion to Principles and Standards for School Mathematics. National Council of Teachers of Mathematics, 2003. | {"url":"https://connectedmath.msu.edu/families/problem-centered-learning.aspx","timestamp":"2024-11-13T08:25:32Z","content_type":"text/html","content_length":"58484","record_id":"<urn:uuid:92048d45-aabd-4326-af78-152218cb2e65>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00507.warc.gz"} |
Let f:R→[0,∞) be such that limx→5f(x) exists and limx→5∣x−5∣... | Filo
Let be such that exists and . Then, is equal to (a) 3 (b) 0 (c) 1 (d) 2
Not the question you're searching for?
+ Ask your question
Exp. (a) Given, exists and But Range of
Was this solution helpful?
Found 3 tutors discussing this question
Discuss this question LIVE
9 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Questions from JEE Mains 2011 - PYQs
Practice questions from Arihant JEE Main Chapterwise Solutions Mathematics (2019-2002) (Arihant)
View more
Practice questions from Limits and Derivatives in the same exam
Practice more questions from Limits and Derivatives
View more
Practice questions on similar concepts asked by Filo students
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Let be such that exists and . Then, is equal to (a) 3 (b) 0 (c) 1 (d) 2
Updated On Mar 19, 2023
Topic Limits and Derivatives
Subject Mathematics
Class Class 11
Answer Type Text solution:1 Video solution: 1
Upvotes 227
Avg. Video Duration 10 min | {"url":"https://askfilo.com/math-question-answers/let-f-r-rightarrow0-infty-be-such-that-lim-_x-rightarrow-5-fx-exists-and-lim-_x","timestamp":"2024-11-02T12:36:23Z","content_type":"text/html","content_length":"708159","record_id":"<urn:uuid:45318d68-ed89-471a-8797-1c9c4e1b901c>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00071.warc.gz"} |
C1. Mechanical non-lin... | AFGC Wiki
C1. Mechanical non-linear problems
C1. Mechanical non-linear problems
1. Description of possible non-linearities
The non-linearity of a mechanical problem comes from the fact that the coefficients of the equilibrium equation depend on the displacements of the solid itself in equilibrium. In other words, the
equilibrium equation is generally implicit.
There are several non-linearity categories for mechanical static problems:
• Material non-linearities: in cases where the constitutive law is not linear or that the response of the material depends on the loading history. In other words, stress is not a linear function of
strains. The most common case in civil engineering is one of the material loaded beyond their elastic capacity, which then develops elasto-plastic behaviors. This behavior is characterized by a
dependency between the stiffness of the material and its stress state.
• Geometric nonlinearities: for cases where the structure is submitted to big displacements or big strains. In the first case, one can no longer write the problem neglecting the changes in the
geometry of the structure. In the second case, one can no longer approach the strains simply as the gradient of the displacements.
• Boundary condition non-linearities: in cases where a structure is progressively loaded and there is potential contact between two bodies with follower forces. These types of non-linearities also
appear when the construction phasing or the assembly of a bridge’s deck are simulated, when digging a gallery, constructing an embankment, etc.
All the above non-linearities can be coupled if the algorithm allows it, but the resolution of the problem becomes more complex.
2. Principle of resolution of a non-linear problem: Newton method
When solving a finite element problem, one looks for the displacement field u, such that the inner forces L[int] are equal to the external forces L[ext]:
Generally, to solve the non-linear static problem, an incremental algorithm is used. For that matter, the problem is parameterized in terms of t (with t representing a pseudo-time, unlike the t
parameter used in dynamics). This parameter is used to index the successive load-steps applied on the structure. More precisely, it consists of searching for the equilibrium state corresponding to
the successive load-steps F[1], F[2], …
This separation leads to solving a series of quasi-linear problems as shown in the figure below and to determine the state of the structure at the time-step t (displacements, strains, stresses)
knowing the solution at the state t-1. The greater the number of load-steps, the better the precision.
Principle of parametrisation in function of t
At each increment t[i] the discrete problem is K[i] x q[i] = F[i] where q[i] is the unknown displacement vector under the applied imposed loading F[i]. While in the linear case seen in chapter 1 the
K matrix was explicit, when the problem is non-linear, K[i] is a matrix with its terms depending implicitly on the value of q[i]. So, q[i] cannot be determined directly by computing the inverse of
the matrix K.
The most used method to solve this non-linear equation is to use a Newton-type algorithm. The idea is to build a good approximation for the equation’s solution
considering its first-order Taylor expansion
One must start from an initial point (close enough to the solution) and then compute by iterations
At each iteration, one should evaluate the residual vector F(qk) until it exceeds (in absolute value) the value arbitrarily close to zero. This convergence criterion must be chosen with care,
respecting the standard used by the calculation code (see section 3.3 for more details).
Note: With the Newton method, at each iteration, one should compute the tangent matrix at the considered point:
The computational cost of this matrix can be time-consuming. If using this matrix allows having a quadratic convergence (so, in fewer iterations), it is not essential to use this matrix. Other
strategies can be adopted to estimate this matrix, namely the quasi-Newton methods. It is conceivable to use the tangent matrix without updating it at each iteration, but also to use the elastic
matrix (figure b) or the secant matrix in the case of a damage model. An illustration of the successive iterations according to the used matrix is shown below.
Illustration of the Newton or quasi-Newton method (elastic matrix)
In general, using the tangent matrix allows a faster convergence (in fewer iterations) but the alternatives might be more effective or more robust according to the situation.
As the method is iterative, the process should be stopped when the stop criterion is reached, in other words, when it is verified that a given value (or several values) becomes negligible. The global
algorithm can be written as follows:
by defining the increments, i indexing the Newton iterations and ε being a positive value close to zero.
Note: The Newton algorithm is used to solve the equilibrium at each time step. It can also be used to find the stresses in each Gauss point (at all iterations of the Newton problem on the global
scale) when the constitutive law requires it.
No Comments | {"url":"https://wiki.afgc.asso.fr/books/finite-element-modeling-and-computations-in-the-field-of-civil-engineering/page/c1-mechanical-non-linear-problems","timestamp":"2024-11-12T03:25:40Z","content_type":"text/html","content_length":"144265","record_id":"<urn:uuid:1e175190-2f19-4227-ae69-3d2e34c8bd05>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00817.warc.gz"} |
Introduction to Diametral Pitch in context of diametral pitch formula
26 Aug 2024
Title: An Introduction to Diametral Pitch: Understanding the Formula and its Significance
Abstract: Diametral pitch is a fundamental concept in gear design, measuring the number of teeth on a gear per unit of circumference. This article provides an introduction to diametral pitch,
including its definition, formula, and significance in the context of gear manufacturing. The diametral pitch formula will be presented in both BODMAS (Brackets, Orders, Division, Multiplication,
Addition, Subtraction) and ASCII formats.
Introduction: Gears are a crucial component in many mechanical systems, such as transmissions, engines, and machinery. The performance and efficiency of gears depend on various factors, including the
number of teeth, pitch, and material properties. Diametral pitch is one of the most important parameters in gear design, as it affects the overall performance and durability of the gear.
Definition: Diametral pitch (DP) is defined as the number of teeth per unit of circumference on a gear. It is typically measured in terms of teeth per inch (TPI) or teeth per millimeter (TPM).
Formula: The diametral pitch formula can be expressed in BODMAS format as:
DP = 25.4 / (π * d)
Where: DP = Diametral Pitch d = Diameter of the gear
In ASCII format, the formula is:
Significance: Diametral pitch plays a crucial role in gear design and manufacturing. A higher diametral pitch indicates a finer tooth pitch, which can improve the efficiency and smoothness of the
gear operation. Conversely, a lower diametral pitch results in a coarser tooth pitch, which may lead to increased wear and tear.
Applications: Diametral pitch is used in various applications, including:
1. Gear design: Diametral pitch is an essential parameter in designing gears for specific applications.
2. Gear manufacturing: The diametral pitch formula helps manufacturers determine the correct tooth pitch for gear production.
3. Performance analysis: Diametral pitch can be used to analyze and predict the performance of gears under different operating conditions.
Conclusion: In conclusion, diametral pitch is a fundamental concept in gear design and manufacturing. Understanding the formula and its significance is crucial for designing and manufacturing
efficient and durable gears. This article has provided an introduction to diametral pitch, including its definition, formula, and applications.
• ANSI/AGMA 2001-D04 (2013). American National Standard for Gear Tooth Geometry.
• ISO 1328-1 (2014). Gears - Part 1: Design and calculation of gears.
• Peterson, J. B. (1994). Gear design and manufacturing. McGraw-Hill Education.
Note: The formula is presented in both BODMAS and ASCII formats for clarity and ease of understanding.
Related articles for ‘diametral pitch formula’ :
Calculators for ‘diametral pitch formula’ | {"url":"https://blog.truegeometry.com/tutorials/education/5d5b9a14a7832621faa978dd49ac2665/JSON_TO_ARTCL_Introduction_to_Diametral_Pitch_in_context_of_diametral_pitch_form.html","timestamp":"2024-11-02T11:21:32Z","content_type":"text/html","content_length":"17183","record_id":"<urn:uuid:68bba532-260d-499b-848c-a17054532c26>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00449.warc.gz"} |
Assignment operators
SystemVerilog assignment operator includes the C assignment operators and special bitwise assignment operators: +=, -=, *=, /=, %=, &=, |=, ^=, <<=, >>=, <<<=, and >>>=. An assignment operator is
semantically equivalent to a blocking assignment, with the exception that any left-hand side index expression is only evaluated once. For example:
a[i]+=2; // same as a[i] = a[i] +2;
In SystemVerilog, an expression can include a blocking assignment, provided it does not have a timing control, Such an assignment must be enclosed in parentheses to avoid common mistakes such as
using a=b for a==b, or a|=b for a!=b.
The semantics of such an assignment expression are those of a function that evaluates the right-hand side, casts the right-hand side to the left-hand data type, stacks it, updates the left-hand side,
and returns the stacked value. The type returned is the type of the left-hand side data type. If the left-hand side is a concatenation, the type returned shall be an unsigned integral value whose bit
length is the sum of the length of its operands.
SystemVerilog includes increment and decrement assignment operators ++i, –i, i++, and i–. These do not need parentheses when used in expressions. These increment and decrement assignment operators
behave as blocking assignments.
The ordering of assignment operations relative to any other operation within an expression is undefined. An implementation can warn whenever a variable is both written and read or written within an
integral expression or in other contexts where an implementation cannot guarantee the order of evaluation. In the following example:
i = 10;
j = i++ + (i = i - 1);
After execution, the value of j can be 18, 19, or 20 depending upon the relative ordering of the increment and the assignment statements.
Operations on logic and bit types
When a binary operator has one operand of type bit and another of type logic, the result is of type logic. If one operand is of type int and the other of type integer, the result is of type integer.
The operators != and == return an X if either operand contains an X or a Z, as in Verilog-2001. This is converted to a 0 if the result is converted to a type bit.
The unary reduction operators (&, ~&, |, ~|, ^, ~^) can be applied to any integer expression (including packed arrays). The operators shall return a single value of type logic if the packed type is
four-valued, and of type bit if the packed type is two-valued.
int i;
bit b = &i;
integer j;
logic c = &j;
Wild equality and wild inequality
SystemVerilog wild-card comparison operators, as described below.
=?= a =?= b a equals b, X, and Z values act as wild cards
!?= a !?= b a not equal b, X, and Z values act as wild cards
The wild equality operator (=?=) and inequality operator (!?=) treat X and Z values in a given bit position as a wildcard. A wildcard bit matches any bit value (0, 1, Z, or X) in the value of the
expression being compared against it.
These operators compare operands bit for bit and return a 1-bit self-determined result. If the operands to the wild-card equality/inequality are of unequal bit length, the operands are extended in
the same manner as for the case equality/inequality operators. If the relation is true, the operator yields a 1. If the relation is false, it yields a 0.
The three types of equality (and inequality) operators in SystemVerilog behave differently when their operands contain unknown values (X or Z). The == and != operators result in X if any of their
operands contains an X or Z. The === and !== check the 4-state explicitly, therefore, X and Z values shall either match or mismatch, never resulting in X. The =?= and !?= operators treat X or Z as
wild cards that match any value, thus, they too never result in X.
Real operators
Operands of type shortreal have the same operation restrictions as Verilog real operands. The unary operators ++ and — can have operands of type real and shortreal (the increment or decrement is by
1.0). The assignment operators +=, -=, *=, /= can also have operands of type real and shortreal.
If any operand, except before the ? in the ternary operator, is real, the result is real. Otherwise, if any operand, except before the ? in the ternary operator, is shortreal, the result is shortreal
The number of bits of an expression is determined by the operands and the context, following the same rules as Verilog. In SystemVerilog, casting can be used to set the size context of an
intermediate value.
With Verilog, tools can issue a warning when the left and right-hand sides of an assignment are different sizes. Using the SystemVerilog size casting, these warnings can be prevented.
The rules for determining the signedness of SystemVerilog expression types shall be the same as those for Verilog. A shortreal converted to an integer by type coercion shall be signed. | {"url":"https://vlsisource.com/tag/operators/","timestamp":"2024-11-03T15:29:00Z","content_type":"text/html","content_length":"55815","record_id":"<urn:uuid:56e7445f-6342-4daa-9ed1-b2e9e036a328>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00594.warc.gz"} |
One Sided Derivative
Derivatives >
A one sided derivative is either a derivative from the left or a derivative from the right.
• Derivative from the left: You approach a point from the left direction of the number line.
• Derivative from the right: You approach a point from the right direction of the number line.
These are particularly useful at endpoints, where a function stops abruptly and doesn’t go beyond a certain point.
Two endpoints A and B on a line segment.
Any function that is differentiable at the end of its domain is called one sided differentiable (Reinholz, n.d.).
Note though, that if both the right and left hand derivatives are equal, the derivative is an ordinary derivative, not a one sided derivative. Ordinary derivatives are the ones you’re normally used
to dealing with in calculus; Another way to define them is that they are not partial derivatives.
A More Formal Definition of a One Sided Derivative
A one sided derivative can be defined more formally as (Fogel, n.d.):
If f is a function on a half closed interval [a, b), then:
The right hand derivative at a, denoted f′[–] is the number
If it exists,
If the function is also defined on a half closed interval
(a, b], then:
The left hand derivative at b, denoted f′[+] is the number
If it exists.
Connection with Limits
In a practical sense, one sided derivatives are analogous to one sided limits.
One sided limits: Limit from the “left” or “right” refers to which (one-sided) direction you approach a limit from.
See also: One sided limits.
Aramanovich, I. et al. (2014). http://Mathematical Analysis: Differentiation and Integration [Print Replica]. Pergamon.
Fogel, M. The Derivative. Retrieved December 29, 2019 from: http://staff.imsa.edu/~fogel/Analysis/PDF/25%20The%20Derivative
Hazelwinkle, M. (1990). Encyclopedia of Mathematics. Springer.
Math Boys. Absolute Value Function. Retrieved August 2019 from: https://www.statisticshowto.com/absolute-value-function/
Reinholz, D. Derivatives. Retrieved December 29, 2019 from: https://www.ocf.berkeley.edu/~reinholz/ed/08sp_m160/lectures/derivatives.pdf
Comments? Need to post a correction? Please Contact Us. | {"url":"https://www.statisticshowto.com/one-sided-derivative/","timestamp":"2024-11-04T15:12:50Z","content_type":"text/html","content_length":"67829","record_id":"<urn:uuid:282b3779-999f-482a-9314-031f615065de>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00490.warc.gz"} |
Stratified mobility fishery models with harvesting outside of no-take areas
Mobility stratification, identifiable from k-means clustering on an appropriate displacement data set, is a common feature of many fish species wherein distinct low-mobility ‘station-keeper’ and
high-mobility ‘ranger’ types are recognized. From recapture records of speckled snapper Lutjanus rivulatus, we develop a Gaussian mixture model of the probability density function for random
displacements by the two types. This leads to a system of two coupled reaction-diffusion equations. We consider a single no-take area (NTA) in one and two dimensions containing a mobility-structured
species. The minimum size of this NTA that leads to species survival is derived and then generalised to a population with n mobility types. Exact non-uniform 1-D steady states are constructed for the
full nonlinear mobility-structured model with lethal (zero density boundary condition) harvesting outside of the NTA. This model is then extended to include an array of evenly spaced NTAs with a
bounded harvesting rate allowed between them. The minimum size of linear, circular and annular NTAs and the maximum sizes of the surrounding fractionally harvested zones that ensure species survival
and connectivity are calculated.
• Coupled reaction-diffusion equations
• Fish mobility
• Gaussian mixture models
• No-take areas
Dive into the research topics of 'Stratified mobility fishery models with harvesting outside of no-take areas'. Together they form a unique fingerprint. | {"url":"https://pure.ul.ie/en/publications/stratified-mobility-fishery-models-with-harvesting-outside-of-no-","timestamp":"2024-11-03T22:15:19Z","content_type":"text/html","content_length":"55712","record_id":"<urn:uuid:a5d0d40d-268c-430e-bc31-af91fdc06479>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00313.warc.gz"} |
Edge-weighted graph
An edge-weighted graph , weighted graph for short , is a graph in graph theory in which a real number is assigned as the edge weight to each edge . Edge-weighted graphs can be directed or undirected.
A graph whose nodes are weighted is called a node-weighted graph .
Weight functions
Edge weights are generally given by an edge weight function. One such weight function is a mapping of the shape
${\ displaystyle d \ colon E \ to \ mathbb {R}}$,
which assigns a real number as weight to each edge . The edge weight of an edge is then denoted by or . ${\ displaystyle e \ in E}$${\ displaystyle d (e)}$${\ displaystyle d_ {e}}$
Metric graph
A complete edge-weighted graph is called metric if for all nodes of the graph ${\ displaystyle a, b, c}$
${\ displaystyle d ({a, c}) \ leq d ({a, b}) + d ({b, c})}$
applies. This means that the route from via to must not be cheaper than the direct route from to . An example of metric graphs distance graphs . ${\ displaystyle a}$${\ displaystyle b}$${\
displaystyle c}$${\ displaystyle a}$${\ displaystyle c}$
For many graph-theoretic problems, additional parameters are required, for example a cost function for determining shortest paths or a capacity function for determining maximum flows . In such a
case, a problem instance is often described by a tuple of the form which, in addition to the graph, contains the desired weight function. ${\ displaystyle (G, d)}$
See also
Individual evidence
1. ↑ Noltemeier, Hartmut: Graph Theoretical Concepts and Algorithms . 3. Edition. Vieweg + Teubner Verlag, Wiesbaden 2012, ISBN 978-3-8348-1849-2 , pp. 74 f . | {"url":"https://de.zxc.wiki/wiki/Kantengewichteter_Graph","timestamp":"2024-11-05T04:25:52Z","content_type":"text/html","content_length":"28048","record_id":"<urn:uuid:1f0e8473-49a8-4e5d-8da5-9ec944a64c3a>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00187.warc.gz"} |
Generalizing Substitution
Generalizing Substitution
RAIRO - Theoretical Informatics and Applications (2010)
• Volume: 37, Issue: 4, page 315-336
• ISSN: 0988-3754
It is well known that, given an endofunctor H on a category C , the initial (A+H-)-algebras (if existing), i.e. , the algebras of (wellfounded) H-terms over different variable supplies A, give rise
to a monad with substitution as the extension operation (the free monad induced by the functor H). Moss [17] and Aczel, Adámek, Milius and Velebil [12] have shown that a similar monad, which even
enjoys the additional special property of having iterations for all guarded substitution rules (complete iterativeness), arises from the inverses of the final (A+H-)-coalgebras (if existing), i.e. ,
the algebras of non-wellfounded H-terms. We show that, upon an appropriate generalization of the notion of substitution, the same can more generally be said about the initial T'(A,-)-algebras resp.
the inverses of the final T'(A,-)-coalgebras for any endobifunctor T' on any category C such that the functors T'(-,X) uniformly carry a monad structure.
Uustalu, Tarmo. "Generalizing Substitution." RAIRO - Theoretical Informatics and Applications 37.4 (2010): 315-336. <http://eudml.org/doc/92726>.
abstract = { It is well known that, given an endofunctor H on a category C , the initial (A+H-)-algebras (if existing), i.e. , the algebras of (wellfounded) H-terms over different variable supplies
A, give rise to a monad with substitution as the extension operation (the free monad induced by the functor H). Moss [17] and Aczel, Adámek, Milius and Velebil [12] have shown that a similar monad,
which even enjoys the additional special property of having iterations for all guarded substitution rules (complete iterativeness), arises from the inverses of the final (A+H-)-coalgebras (if
existing), i.e. , the algebras of non-wellfounded H-terms. We show that, upon an appropriate generalization of the notion of substitution, the same can more generally be said about the initial T'
(A,-)-algebras resp. the inverses of the final T'(A,-)-coalgebras for any endobifunctor T' on any category C such that the functors T'(-,X) uniformly carry a monad structure. },
author = {Uustalu, Tarmo},
journal = {RAIRO - Theoretical Informatics and Applications},
keywords = {Algebras of terms; non-wellfounded terms; substitution; iteration of guarded substitution rules; monads; hyperfunctions; finitely or possibly infinitely branching trees.; terms; guarded
language = {eng},
month = {3},
number = {4},
pages = {315-336},
publisher = {EDP Sciences},
title = {Generalizing Substitution},
url = {http://eudml.org/doc/92726},
volume = {37},
year = {2010},
TY - JOUR
AU - Uustalu, Tarmo
TI - Generalizing Substitution
JO - RAIRO - Theoretical Informatics and Applications
DA - 2010/3//
PB - EDP Sciences
VL - 37
IS - 4
SP - 315
EP - 336
AB - It is well known that, given an endofunctor H on a category C , the initial (A+H-)-algebras (if existing), i.e. , the algebras of (wellfounded) H-terms over different variable supplies A, give
rise to a monad with substitution as the extension operation (the free monad induced by the functor H). Moss [17] and Aczel, Adámek, Milius and Velebil [12] have shown that a similar monad, which
even enjoys the additional special property of having iterations for all guarded substitution rules (complete iterativeness), arises from the inverses of the final (A+H-)-coalgebras (if existing),
i.e. , the algebras of non-wellfounded H-terms. We show that, upon an appropriate generalization of the notion of substitution, the same can more generally be said about the initial T'(A,-)-algebras
resp. the inverses of the final T'(A,-)-coalgebras for any endobifunctor T' on any category C such that the functors T'(-,X) uniformly carry a monad structure.
LA - eng
KW - Algebras of terms; non-wellfounded terms; substitution; iteration of guarded substitution rules; monads; hyperfunctions; finitely or possibly infinitely branching trees.; terms; guarded
UR - http://eudml.org/doc/92726
ER -
1. P. Aczel, Algebras and coalgebras, in Revised Lectures from Int. Summer School and Wksh. on Algebraic and Coalgebraic Methods in the Mathematics of Program Construction, ACMMPC 2000 (Oxford,
April 2000), edited by R. Backhouse, R. Crole and J. Gibbons. Springer-Verlag, Lecture Notes in Comput. Sci.2297 (2002) 79-88.
2. P. Aczel, J. Adámek, S. Milius and J. Velebil, Infinite trees and completely iterative theories: a coalgebraic view. Theor. Comput. Sci.300 (2003) 1-45 .
3. J. Adámek, S. Milius and J. Velebil, Free iterative theories: a coalgebraic view. Math. Struct. Comput. Sci.13 (2003) 259-320.
4. J. Adámek, S. Milius and J. Velebil, On rational monads and free iterative theories, in Proc. of 9th Int. Conf. on Category Theory and Computer Science, CTCS'02 (Ottawa, Aug. 2002), edited by R.
Blute and P. Selinger. Elsevier, Electron. Notes Theor. Comput. Sci.69 (2003).
5. F. Bartels, Generalized coinduction. Math. Struct. Comput. Sci.13 (2003) 321-348.
6. D. Cancila, F. Honsell and M. Lenisa, Generalized coiteration schemata, in Proc. of 6th Wksh. on Coalgebraic Methods in Computer Science, CMCS'03 (Warsaw, Apr. 2003), edited by H.P. Gumm.
Elsevier, Electron. Notes Theor. Comput. Sci.82 (2003).
7. C.C. Elgot, Monadic computation and iterative algebraic theories, in Proc. of Logic Colloquium '73 (Bristol, July 1973), edited by H.E. Rose and J.C. Shepherdson. North-Holland, Stud. Logic Found
Math.80 (1975) 175-230.
8. C.C. Elgot, S.L. Bloom and R. Tindell, On the algebraic structure of rooted trees. J. Comput. Syst. Sci.16 (1978) 362-399.
9. N. Ghani, C. Lüth, F. de Marchi and J. Power, Dualising initial algebras. Math. Struct. Comput. Sci.13 (2003) 349-370.
10. N. Ghani, C. Lüth and F. de Marchi, Coalgebraic monads, in Proc. of 5th Wksh. on Coalgebraic Methods in Computer Science, CMCS'02 (Grenoble, Apr. 2001), edited by L.S. Moss. Elsevier, Electron.
Notes Theor. Comput. Sci.65 (2002).
11. S. Krstic, J. Launchbury and D. Pavlovic, Categories of processes enriched in final coalgebras, in Proc. of 4th Int. Conf. on Foundations of Software Science and Computation Structures,
FoSSaCS'01 (Genova, Apr. 2001), edited by F. Honsell and M. Miculan. Springer-Verlag, Lecture Notes in Comput. Sci.2030 (2001) 303-317.
12. M. Lenisa, From set-theoretic coinduction to coalgebraic coinduction: some results, some problems, in Proc. of 2nd Wksh. on Coalgebraic Methods in Computer Science, CMCS'99 (Amsterdam, March
1999), edited by B. Jacobs and J. Rutten. Elsevier, Electron. Notes Theor. Comput. Sci.19 (1999).
13. E.G. Manes, Algebraic theories, Graduate Texts in Mathematics26. Springer-Verlag, New York (1976).
14. R. Matthes and T. Uustalu, Substitution in non-wellfounded syntax with variable binding, in Proc. of 6th Wksh. on Coalgebraic Methods in Computer Science, CMCS'03 (Warsaw, Apr. 2003), edited by
H.P. Gumm. Elsevier, Electron. Notes Theor. Comput. Sci.82 (2003).
15. S. Milius, On iteratable endofunctors, in Proc. of 9th Int. Conf. on Category Theory and Computer Science, CTCS'02 (Ottawa, Aug. 2002), edited by R. Blute and P. Selinger. Elsevier, Electron.
Notes Theor. Comput. Science69 (2003).
16. L.S. Moss, Parametric corecursion. Theor. Comput. Sci.260 (2001) 139-163 .
17. R. Paterson, Notes on monads for functional programming, unpublished draft (1995).
18. T. Uustalu, (Co)monads from inductive and coinductive types, in Proc. of 2001 APPIA-GULP-PRODE Joint Conf. on Declarative Programming, AGP'01 (Évora, Sept. 2001), edited by L.M. Pereira and P.
Quaresma. Dep. de Informática, Univ. do Évora (2001) 47-61.
19. T. Uustalu and V. Vene, Primitive (co)recursion and course-of-value (co)iteration, categorically. Informatica10 (1999) 5-26.
20. T. Uustalu and V. Vene, The dual of substitution is redecoration, in Trends in Functional Programming 3, edited by K. Hammond and S. Curtis. Intellect, Bristol & Portland, OR (2002) 99-110.
21. T. Uustalu, V. Vene and A. Pardo, Recursion schemes from comonads. Nordic J. Comput.8 (2001) 366-390.
You must be logged in to post comments.
To embed these notes on your page include the following JavaScript code on your page where you want the notes to appear. | {"url":"https://eudml.org/doc/92726","timestamp":"2024-11-04T20:34:07Z","content_type":"application/xhtml+xml","content_length":"44687","record_id":"<urn:uuid:55ac8e70-891f-4fdb-924d-06bff2f62041>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00333.warc.gz"} |
The value of the expression cosec−1(2)+cos−1(21)+tan−1(−1) is... | Filo
Question asked by Filo student
The value of the expression is
Not the question you're searching for?
+ Ask your question
Video solutions (2)
Learn from their 1-to-1 discussion with Filo tutors.
13 mins
Uploaded on: 10/14/2022
Was this solution helpful?
Found 4 tutors discussing this question
Discuss this question LIVE for FREE
10 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text The value of the expression is
Updated On Jan 5, 2023
Topic Calculus
Subject Mathematics
Class Class 12
Answer Type Video solution: 2
Upvotes 247
Avg. Video Duration 19 min | {"url":"https://askfilo.com/user-question-answers-mathematics/the-value-of-the-expression-is-32333534313331","timestamp":"2024-11-11T16:18:38Z","content_type":"text/html","content_length":"285426","record_id":"<urn:uuid:b4105ddb-fd95-4111-9ea0-23469fd3f0ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00443.warc.gz"} |
Unscramble OBIA
How Many Words are in OBIA Unscramble?
By unscrambling letters obia, our Word Unscrambler aka Scrabble Word Finder easily found 12 playable words in virtually every word scramble game!
Letter / Tile Values for OBIA
Below are the values for each of the letters/tiles in Scrabble. The letters in obia combine for a total of 6 points (not including bonus squares)
What do the Letters obia Unscrambled Mean?
The unscrambled words with the most letters from OBIA word or letters are below along with the definitions.
• obia () - Sorry, we do not have a definition for this word | {"url":"https://www.scrabblewordfind.com/unscramble-obia","timestamp":"2024-11-11T05:17:05Z","content_type":"text/html","content_length":"37680","record_id":"<urn:uuid:9a5621d5-9312-4e06-a1a6-e19d2e74c706>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00144.warc.gz"} |
Fast Integer Square Roots
Related functions and programs:
NBR_UTIL - prime number and integer square root functions; e.g., nbrUnsignedSQRT().
squint - integer square root algorithms test program.
NOTE: The algorithm works and achieves its speed by examining and manipulating bits in the standard, binary representation of unsigned integers. In the rare case of a target CPU using a different
encoding or representation of integers, then another algorithm must be found. Examples of different representations include the various forms of binary-coded decimal and Gray code numbers. I vaguely
recall, from the 1970s or 1980s, attempts to create 4-level logic elements that would have made possible quarternary representation of numbers.
Function nbrUnsignedSQRT() quickly computes the floor of the square root of an unsigned integer: floor[√n]; in other words, the largest integer equal to or less than the number's real square root. In
the file prolog, I give the example of 34, whose real square root is about 5.8, but nbrUnsignedSQRT() returns 5. (The function returns a square root of 5 for 25, 26, ..., 34, and 35, only stepping up
to a root of 6 for numbers 36 through 48.)
A widely cited algorithm developed by Ross M. Fosler is used:
"Fast Integer Square Root" (PDF) by Ross M. Fosler
Application Note TB040, DS91040A, © 2000 Microchip Technology Inc.
combined with an ingenious tweak from Tristan Muntsinger:
http://www.codecodex.com/wiki/Calculate_an_integer_square_root#C (fourth code window under C) (Wayback Machine)
NOTE: I found Ross Fosler's flowchart confusing and I wasn't familiar with the PIC18 instruction set used in his code, so I reverse-engineered the algorithm from Example 1 of his application note and
I implemented that algorithm in C. The reverse-engineered algorithm is described in the following paragraphs. I later came across documentation for the PIC18 instruction set and I was able to study
his assembly language code. I was surprised to find that Fosler's algorithm was slightly different than "my" algorithm. Specifically, he handled the "...10..." to "...01..." transformation (or "Shift
bit right" as it's called in the application note) in an unusual, functionally equivalent way which might have been more advantageous, performance-wise, on the PIC controller. So the description
below applies equally to his and my variations on his algorithm, including the limitations and risks of overflow I bring up. (I discuss the assembly language algorithm in further detail down at the
end when I introduce Tristan Muntsinger's optimization.)
The algorithm begins with an initial estimate of the square root of the target number. The estimate is based on the integer square root of the maximum representable value for the unsigned integer
type. Assume this type is m bits wide (where m is even).
• The maximum representable value — let's call it UM_MAX — is the value when all m bits are set. (C's ULONG_MAX is used in the code.)
• The square root of UM_MAX — let's call it UM_MAX_SQRT — is the value when all the bits are set in the least significant half of the integer type; i.e., the m/2 least significant bits are all set.
• Take the most significant 1 bit in UM_MAX_SQRT and set the corresponding bit in the estimated square root. This is bit (m/2)-1 (zero-based bit numbering) and the value of this initial estimate is
The algorithm then exams progressively less significant bits in the intermediate square root, bouncing above and below the actual square root on its way as bits are set or cleared, until the
algorithm finally zeroes in (pun sort of intended) on the absolute least significant bit (bit 0) of the actual integer square root.
For example, let unsigned integers be 16 bits wide. UM_MAX is 0xFFFF (65,535 decimal); UM_MAX_SQRT is then the lower half of the word with all bits set: 0x00FF (255 decimal). Taking the most
significant 1 bit (bit 16/2-1 = 7) from UM_MAX_SQRT gives an initial estimate of 0x0080 (128, or half of 256). In binary, the algorithm begins with an estimated square root of:
The adjacent 0 bit (bit 6) to the right of the 1 bit is examined first. If the square of the intermediate root (0x0080) is less than the target number (i.e., the intermediate root is less than the
actual root), then set that bit in order to increase the value (0x00C0, 192 decimal) of the intermediate root, possibly overshooting the actual root:
For illustrative purposes, let's assume that was the case. Now move to the next 0 bit, bit 5. If the intermediate root is less than the actual root, the algorithm would set bit 5 (0x00E0 — 3 bits
set) and loop again. (I'm using shorthand here for what is actually a comparison of the square of the intermediate root and the target number.) Ross Fosler calls setting a bit under this condition
"Start new bit" in his flowchart.
If, however, the intermediate root is greater than the actual root, lower the intermediate root by clearing the previous bit (bit 6) and setting the current bit (bit 5), giving a value of 0x00A0 and
possibly undershooting the actual root:
The pattern here is that if the previous and current bits are 10 (binary) and the intermediate root is too large, then substitute 01 (binary) for the two bits. This is essentially picking the value,
01, halfway between 00 (tested when the previous bit was examined) and 10 (the result of the previous bit being examined). Ross Fosler calls this "...10..." to "...01..." transformation "Shift bit
right" in his flowchart.
Perhaps more clearly, let the number in curly braces be the number of most significant bits of the root determined so far. Following the example above, the initial estimate is:
estimate{0} = 0000 0000 1000 0000
The estimate is less than the actual square root, so the most significant bit has been determined and we next try setting the adjacent bit:
estimate{1} = 0000 0000 1100 0000
(Although the first two bits are set, it's only estimate{1}. We know the left bit is correct, but the right bit is, as yet, only a trial bit.) This estimate is greater than the square root and we now
know that the root must fall between the previous estimate and the current estimate:
estimate{0} < root < estimate{1}
So we pick a new estimate halfway between the two estimates. This requires clearing the bit we set in estimate{1}, so we know for certain the two most significant bits ("10" instead of "11") of the
root and we add a third trial bit ("1"):
estimate{2} = (estimate{0} + estimate{1}) / 2
= 0000 0000 1010 0000
Although my example happened to fall at the very beginning of the estimates, the same logic is applied at an arbitrary bit location in the estimates:
estimate{b+2} = (estimate{b} + estimate{b+1}) / 2
Finally, it's time to consider how the algorithm terminates. If an intermediate root is equal to the actual square root, the algorithm can exit the loop immediately and return the square root to the
calling program. Otherwise, the algorithm will examine all the bits in the root down to the least significant bit. A bit in a bit mask keeps track of the current bit under examination; the bit is
shifted right on each iteration of the algorithm. After the algorithm examines the least significant bit of the root, the tracking bit is shifted completely out of the bit mask, leaving the bit mask
equal to zero and terminating the algorithm.
Note that the "...10..." to "...01..." transformation can take place when the very first estimate, the initial estimate, is greater than the square root. In the 16-bit example above, the initial
estimate can be shifted right bit by bit until a value less than or equal to the root is found. Suppose we're finding the square root of 529, which is 23 (0x17, 10111 binary). The square root will be
refined as follows at each step:
Precomputed initial estimate is 128
Estimate 128 > actual root 23, Shift bit right
Estimate 64 > actual root 23, Shift bit right
Estimate 32 > actual root 23, Shift bit right
Estimate 16 < actual root 23, Start new bit
Estimate 24 > actual root 23, Shift bit right
Estimate 20 < actual root 23, Start new bit
Estimate 22 < actual root 23, Start new bit
Estimate 23 == actual root 23, return!
When the width, m, of an unsigned integer is even, only the lower half of the integer is manipulated and the maximum value of that lower half is UM_MAX_SQRT — all m/2 bits set. Consequently, there is
no danger of overflow in the loop if the algorithm computes the square of an intermediate root for the purpose of comparing it to the target number; e.g., "(root × root) < number".
However, if m is odd, the square root of the maximum representable value, UM_MAX, occupies (m+1)/2 bits and not all bits in that lower "half" are set. Consequently, computing "root × root" could
overflow. For example, imagine a CPU that represents all integers, signed and unsigned, as 16-bit, signed-magnitude numbers where the most significant bit is the sign. The precision of integers is
effectively 15 bits and:
UM_MAX = 0x7FFF (32767 decimal) 0111 1111 1111 1111
UM_MAX_SQRT = 0x00B5 (181 decimal) 0000 0000 1011 0101
The most significant bit of UM_MAX_SQRT is, as with a full 16-bit word, bit 7:
This is the initial estimate (128) of the square root and, if the estimate is less than the actual root (i.e., the target number is between 129 and UM_MAX_SQRT), the algorithm will set bit 6 (0x00C0,
192 decimal):
192 is greater than UM_MAX_SQRT. In the next iteration for bit 5, multiplying 192 times 192 will overflow UM_MAX. My code avoids this by comparing the intermediate root to the target number divided
by the intermediate root; e.g., the less-than comparison is transformed thus, which removes the possibility of overflow:
(root * root) < number ==> root < (number / root)
(I didn't need to look as far afield as signed-magnitude numbers! I later came across someone else's function that computes the square root of a signed, two's-complement, 32-bit integer. The
effective precision in this case is 31 bits and, when multiplying "root × root", the function would regard a negative result as a sign of overflow.)
My implementation of the algorithm differs from Ross Fosler's in the following ways:
• Fosler's and others' integer square root functions hard-code the initial estimate. Consequently, there may be separate functions for computing the square root of, for example, 16- and 32-bit
numbers, each function with its own, appropiate, hard-coded initial estimate. Separate functions were necessary in Fosler's case because he was writing in assembly language for a specific class
of microcontrollers and the handling of 32-bit numbers at that level is more involved than the handling of 16-bit numbers.
My C function accepts a generic, "unsigned long" integer and returns the unsigned long square root of the integer. On the first call to nbrUnsignedSQRT(), the function precomputes and caches (i)
how many bits wide an unsigned long is and (ii) the initial estimate, which has a single bit set corresponding to the most significant bit of the maximum square root. Thus, nbrUnsignedSQRT() will
work whatever the size of an unsigned long in the target compiler/CPU environment.
The precomputed parameters are normally stored internally in nbrUnsignedSQRT(). Tasks running in a shared address space can each allocate their own memory for the parameters; see nbrUnsignedSQRT
()'s context argument.
• Because of the danger of overflow when using integers with an odd number of bits, I at first used the non-overflowing "root relop number/root" comparisons in nbrUnsignedSQRT(). As Ross Fosler
knew before me, division operations are expensive, even on an Intel-based Linux PC. I mean, I knew that division operations are expensive, but I'll admit I was surprised when I saw how much of an
impact the division operations had on nbrUnsignedSQRT()'s performance.
To increase the performance in the more common case, my function chooses the correct form of comparisons based on the bit width of unsigned longs. For even-precision unsigned integers,
nbrUnsignedSQRT() can safely use these fast comparisons:
without fear of overflow. For odd-precision unsigned integers, nbrUnsignedSQRT() reverts to the slow comparisons:
• Ross Fosler's algorithm calculates the nearest integer square root of a number when floor[√n] is even. For example,
sqrt(15450) = 124.3 integer root is 124
sqrt(15475) = 124.4 integer root is 124
sqrt(15500) = 124.5 integer root is 125
sqrt(15525) = 124.6 integer root is 125
This makes sense since the least significant bit of an even root is "0" and thus offers a place for the algorithm to "start a new bit" right before shifting the bit-under-examination bit out of
its variable and, since that bit mask is now zero, terminating the algorithm.
Odd roots already have a least significant bit of "1", so there is no way to "start a new bit" and incrementing the bit to round up the root would propagate changes back through the more
significant bits. For example,
sqrt(81) = 9 integer root is 9
sqrt(86) = 9.3 integer root is 9
sqrt(89) = 9.4 integer root is 9
sqrt(91) = 9.5 integer root is 9
sqrt(92) = 9.6 integer root is 9
sqrt(99) = 9.95 integer root is 9
sqrt(100) = 10 integer root is 10
I wanted nbrUnsignedSQRT() to return the largest integer less than or equal to the actual square root — floor[√n]. My function originally performed an extra "iteration" that transformed the least
significant bit and a virtual first fractional bit from "1.0" to "0.1", which resulted in the desired behavior.
Tristan Muntsinger's optimization moves the shift and test of the bit-under-examination mask above/before the "start new bit" operation, so his algorithm terminates before it can "start a new
bit" in the least-significant bit location. Since his algorithm always returns floor[√n], when I incorporated his optimization into my code, the extra iteration was no longer necessary.
As an addendum to the differences above, I show here how my code evolved with better understanding and with exposure to others' algorithms.
Using Tristan Muntsinger's shorter and cleaner variable names, here's my original interpretation of Ross Fosler's algorithm, based on his example:
-- Variable names: g = root, c = bitmask, n = number
g = 1 -- Find most significant bit of square root.
while (g < n/g) -- (Instead of hard-coding 0x8000.)
g <<= 1
g >>= 1
c = g >> 1 -- Refine value of square root.
while (c != 0) {
if (g > n/g) -- Intermediate root too high?
g ^= c << 1 -- Begin changing "...10..." to "...01..."
else if (g == n/g) -- Final square root?
c = 0 -- Exit loop.
g |= c -- Insert 1-bit if root too high or low.
c >>= 1
if (g > n/g) g-- -- Ensure root is floor[√n].
return (g)
A little loose and slow. Here's Ross Fosler's actual, much tighter algorithm, gleaned from his PIC code:
-- Variable names: g = RES, c = BITLOC, n = ARGA, save = TEMP
save = 0
g = c = 0x8000
do {
if (g*g > n) -- Intermediate root too high?
g = save -- Get last guess with current bit cleared.
save = g -- Save current guess before setting next bit.
c >>= 1
g |= c -- Insert 1-bit.
} while (c != 0)
return (g) -- Final root may not be floor[√n].
ASIDE: There are no PIC18 arithmetic shift instructions, only rotate instructions, with or without the carry flag. Fosler's Sqrt16() function computes the 8-bit root of a 16-bit number. The location
bit, c/BITLOC, is shifted right with a rotate right without the carry flag. The algorithm terminates not when c/BITLOC is zero, but when the location bit circles around from bit 0 to bit 7 of c/
BITLOC! His Sqrt32() function already has to use the rotate right with carry instruction to shift the two-byte c/BITLOC, so the algorithm terminates after the location bit rotates out of bit 0 of c/
BITLOC into the carry flag. Testing bit 7 in Sqrt16() and the carry flag in Sqrt32() are semantically equivalent to testing if c/BITLOC is zero.
When the intermediate root is too high, my code handled the "...10..." to "...01..." transition by clearing the previous bit and setting the current bit. Ross Fosler's code retrieved the last good
value (in which the current bit was zero) and set the next bit. (NOTE that my code's "current bit" is in advance of Fosler's and Muntsinger's current bits; i.e., my current bit is the 0-bit in
"...10..." while Fosler's and Muntsinger's current bit is the immediately preceding 1-bit. This happens when you try to deduce an algorithm from an example!)
Tristan Muntsinger's ingenious tweak to the general algorithm eliminates the need for my previous-bit fiddling or Fosler's last-good-value variable by moving the test for the final square root (last
bit shifted out of bitmask c, leaving it zero),
c >>= 1 -- Fragment of Fosler's code.
g |= c
} while (c != 0)
up before the insertion of the 1-bit:
c >>= 1 -- Fragment of Muntsinger's code.
if (c == 0) return (g)
g |= c
Here's his algorithm in full:
g = c = 0x8000
for ( ; ; ) {
if (g*g > n) g ^= c
c >>= 1
if (c == 0) return (g)
g |= c
Muntsigner's algorithm also ensures that floor[√n] is returned. Ultimately, I adopted his algorithm, adapting it for whatever size, fixed-width, unsigned long integers supported by the platform. The
characteristics of the platform are computed on the first call to nbrUnsignedSQRT() and cached for subsequent use. The characteristics are (i) the precision, odd or even, of unsigned longs and (ii)
the most significant bit of the maximum square root, √ULONG_MAX.
My final algorithm, with support for odd-precision numbers:
-- e.g., 0x80000000 (64-bit).
g = c = cached-most-significant-bit
for ( ; ; ) {
if (cached-odd-precision-flag) {
if (g < n/g) g ^= c -- Odd precision requires division.
} else if (g*g < n) { -- Even precision allows multiplication.
g ^= c
c <<= 1
if (c == 0) return (g)
g |= c
Ross Fosler's algorithm is a straightforward (now that he came up with it for us!) and efficient means of computing the integer square root of a number. As Fosler says in his application note, the
algorithm is fast (i) when compared to an adaptation of the Newton-Raphson method to integers and (ii) because it avoids division operations, which the Newton-Raphson method would require. The latter
was especially important because division was a slow operation on the microprocessor he was targeting.
(I don't know if Fosler invented the algorithm or if the algorithm is independently discovered by people who put their mind to it. His application note, however, is widely cited on the internet and
in some academic papers.)
As I mentioned in the Differences section, my use of division to avoid overflow in root to number comparisons, "root relop number/root", was very slow and switching to "root×root relop number" for
even-precision integers produced a big performance boost. The former, division-based comparisons are, of necessity, still used for odd-precision integers, so computing the square root on those
platforms will be slow.
I benchmarked different algorithms using my integer square root test program, squint. The following algorithms, identified by their name on the test program's command line, can be tested:
nbr - the default nbrUnsignedSQRT() in the separate LIBGPL library.
alternate - an alternate function used to test changes that will, if they work, eventually be incorporated into nbrUnsignedSQRT().
crenshaw - is Jack Crenshaw's integer square root function. His original column, "Integer Square Roots", appeared in Embedded Systems Programming, February 1, 1998. Unfortunately, the web page is
missing the images and code listings.
The algorithm is also explained in identical or more detail in chapter 4 of his book, Math Toolkit for Real-Time Development, later renamed Math Toolkit for Real-Time Programming. The code I used is
the one presented in Listing 4 of his column and Listing 4.7 of his book. (If you know how to use Google, you can find chapter 4 online at Google Books.)
martin - is Martin Buchanan's function, which bears some resemblance to Crenshaw's function. I used his 3rd integer square root function found at the Code Codex:
ross - is my original interpretation of Ross Fosler's algorithm.
tristan - is Tristan Muntsinger's integer square root function, which is a variation of Ross Fosler's algorithm. Martin Buchanan included this function on the same Code Codex page:
The "crenshaw", "martin", and "tristan" functions were modified to work at different precisions and were tested using 64-bit unsigned longs. All the functions use the same, one-time determination of
precision as nbrUnsignedSQRT(). I don't yet understand how they work, so my versions of Crenshaw's and Martin's functions bail out when faced with odd-precision integers.
Since no changes are being contemplated, the "alternate" function is currently identical to nbrUnsignedSQRT(). And Tristan Muntsinger's function is nearly identical to the first two. Consequently, we
should expect similar timing numbers.
Instead, I was met with very odd timing. My testing platform was an old PC running 64-bit Fedora Core 15, GCC 4.6.3 20120306, and with an "AMD Athlon(tm) 64 X2 Dual Core Processor 3800+" CPU. (I also
did some tests on a 2015 or 2016 laptop running Linux Mint Mate 17 or 18 and on an Android 5.1 tablet with a 32-bit app.) I won't go into the details of the anomalies I saw except to mention the
oddest one.
ASIDE: While researching this problem, I came across this lengthy StackOverflow discussion of the x86 popcnt instruction and pipelining. The title, "Replacing a 32-bit loop counter with 64-bit
introduces crazy performance deviations", is innocently misleading because the observed performance deviations are not caused by the loop counter. It's scary that you can't write "deterministic" code
that has consistent performance. The performance of your code is dependent, no pun intended, on what's in the pipeline, which version of a processor you're using, hidden dependencies in the
instruction set, the compiler writers, etc. As with division operations being expensive, I know all these things, but this discussion really made plain to me how insidious they can be.
NOTE: To avoid confusion, the following paragraph purposefully speaks of my square root test program, squint. At the time of the testing described in the paragraph, I was actually testing both prime
number algorithms and square root algorithms with a single program, prime. As hinted at in the succeeding paragraph, I eventually split prime into two, more sensibly focused programs: primal and
LIBGPL's nbrUnsignedSQRT() is identical to squint's "alternate" function, both in the C code and in the GCC-generated, "-O1" assembly listings: the same sequence of instructions, the same registers,
and the same addressing modes. Yet nbrUnsignedSQRT() took 14 seconds for the 100 million calculations, while squint's unsignedSQRT_A() took 17 seconds. Why the big difference? The assembly code for
the call site in squint simply loaded the function pointer, sqrtF, into a register and made an indirect call to the selected function through the register. So there was no differentiation in the
function call between the external nbrUnsignedSQRT() and the internal, static unsignedSQRT_A(). Moving unsignedSQRT_A() to a separate source file resulted in the timing for the alternate function
dropping from 17 seconds to 15 seconds, a significant reduction, but still one second slower than nbrUnsignedSQRT().
Ultimately, I moved the square root program and different functions into squint.c and tested the algorithms using (i) no optimization, (ii) "-O1", and (iii) "-O3". I don't know why, but I got what
seemed like reasonable results from this setup. The command lines looked as follows (√15241578750190521 = 123456789):
% squint 15241578750190521[,algo] -time -repeat 100000000
With no optimization, "-O0, all the algorithms took longer than 30 seconds.
Using "-O1" in LIBGPL and in squint:
LIBGPL sqrt() ...
14.44 seconds (nbr)
14.40 seconds (nbr)
Alternate sqrt() ...
14.42 seconds (alt)
14.42 seconds (alt)
Crenshaw sqrt() ...
21.21 seconds (crenshaw)
21.27 seconds (crenshaw)
Martin sqrt() ...
11.30 seconds (martin)
11.21 seconds (martin)
Tristan sqrt() ...
14.31 seconds (tristan)
14.32 seconds (tristan)
Note that Martin Buchanan's function takes about 11 seconds and Jack Crenshaw's function is nearly twice as slow, taking about 21 seconds.
Using "-O3":
LIBGPL sqrt() ...
14.37 seconds (nbr)
14.27 seconds (nbr)
Alternate sqrt() ...
14.25 seconds (alt)
14.24 seconds (alt)
Crenshaw sqrt() ...
10.81 seconds (crenshaw)
10.80 seconds (crenshaw)
Martin sqrt() ...
8.94 seconds (martin)
8.95 seconds (martin)
Tristan sqrt() ...
14.33 seconds (tristan)
14.29 seconds (tristan)
Buchanan's function drops a little over 2 seconds, but Crenshaw's cuts its "-O1" times in half!
Conclusion: Buchanan's and Crenshaw's functions are sensitive to optimization levels. The functions based on Tristan Muntsinger's algorithm have pretty consistent performance at any non-zero level of
All in all, I've been pleased with the performance of Fosler's and Muntsinger's algorithms and I'm satisfied with the decision to use their algorithms in nbrUnsignedSQRT().
A good book on designing and developing PIC18-based projects is Han-Way Huang's, PIC Microcontroller: An Introduction to Software and Hardware Interfacing. I used Chapter 2, "PIC18 Assembly Language
Programming", to figure out how Ross Fosler's code worked. The chapter is a good tutorial, but groups of related instructions are listed in tables scattered throughout the chapter and I had to browse
around looking for the instructions I was trying to decipher. (And if you're serious about PIC18 programming, you'll need Chapter 1 to learn about the CPU architecture: registers, memory layout,
etc.) Section 4.10.1, in Chapter 4, presents a flowchart for computing an integer square root that is basically the same as Ross Fosler's. The book's flowchart is at a more abstract level, using a
counter, i, and array notation, NUM[i], to address individual bits in a number. Actually implementing the book's algorithm would require bit masking and shifting ... in short, designing and writing
code similar to Fosler's. (The PIC18 does have bit set, clear, and toggle instructions that operate on numbered bits — 0-7 in a byte! Fosler was counting clock cycles, so determining the byte and bit
offset of a numbered bit in a 16-bit word would be more trouble and more time consuming than bit masking and shifting.)
If you can't get a hold of Huang's book, you can look up information about the PIC18 architecture on the web and couple that knowledge with "PIC18F Instruction Set" to figure out Fosler's code.
(Appendix D from Microcontroller Theory and Applications with the PIC18F by M. Rafiquzzaman.) | {"url":"http://geonius.com/writing/other/cmoon.html","timestamp":"2024-11-10T14:19:21Z","content_type":"text/html","content_length":"40488","record_id":"<urn:uuid:3ffc6728-bf4f-44b7-93c2-231b532d4547>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00046.warc.gz"} |
Three Rules of Statistical Analysis to Unlearn
There are important ‘rules’ of statistical analysis. Like
• Always run descriptive statistics and graphs before running tests
• Use the simplest test that answers the research question and meets assumptions
• Always check assumptions.
But there are others you may have learned in statistics classes that don’t serve you or your analysis well once you’re working with real data.
When you are taking statistics classes, there is a lot going on. You’re learning concepts, vocabulary, and some really crazy notation. And probably a software package on top of that.
In other words, you’re learning a lot of hard stuff all at once.
Good statistics professors and textbook authors know that learning comes in stages. Trying to teach the nuances of good applied statistical analysis to students who are struggling to understand basic
concepts results in no learning at all.
And yet students need to practice what they’re learning so it sticks. So they teach you simple rules of application. Those simple rules work just fine for students in a stats class working on
sparkling clean textbook data.
But they are over-simplified for you, the data analyst, working with real, messy data.
Here are three rules of data analysis practice that you may have learned in classes that you need to unlearn. They are not always wrong. They simply don’t allow for the nuance involved in real
statistical analysis.
The Rules of Statistical Analysis to Unlearn:
1. To check statistical assumptions, run a test. Decide whether the assumption is met by the significance of that test.
Every statistical test and model has assumptions. They’re very important. And they’re not always easy to verify.
For many assumptions, there are tests whose sole job is to test whether the assumption of another test is being met. Examples include the Levene’s test for constant variance and Kolmogorov-Smirnov
test, often used for normality. These tests are tools to help you decide if your model assumptions are being met.
But they’re not definitive.
When you’re checking assumptions, there are a lot of contextual issues you need to consider: the sample size, the robustness of the test you’re running, the consequences of not meeting assumptions,
and more.
What to do instead:
Use these test results as one of many pieces of information that you’ll use together to decide whether an assumption is violated.
2. Delete outliers that are 3 or more standard deviations from the mean.
This is an egregious one. Really. It’s bad.
Yes, it makes the data look pretty. Yes, there are some situations in which it’s appropriate to delete outliers (like when you have evidence that it’s an error). And yes, outliers can wreak havoc on
your parameter estimates.
But don’t make it a habit. Don’t follow a rule blindly.
Deleting outliers because they’re outliers (or using techniques like Winsorizing) is a great way to introduce bias into your results or to miss the most interesting part of your data set.
What to do instead:
When you find an outlier, investigate it. Try to figure out if it’s an error. See if you can figure out where it came from.
3. Check Normality of Dependent Variables before running a linear model
In a t-test, yes, there is an assumption that Y, the dependent variable, is normally distributed within each group. In other words, given the group as defined by X, Y follows a normal distribution.
ANOVA has a similar assumption: given the group as defined by X, Y follows a normal distribution.
In linear regression (and ANCOVA), where we have continuous variables, this same assumption holds. But it’s a little more nuanced since X is not necessarily categorical. At any specific value of X, Y
has a normal distribution. (And yes, this is equivalent to saying the errors have a normal distribution).
But here’s the thing: the distribution of Y as a whole doesn’t have to be normal.
In fact, if X has a big effect, the distribution of Y, across all values of X, will often be skewed or bimodal or just a big old mess. This happens even if the distribution of Y, at each value of X,
is perfectly normal.
What to do instead:
Because normality depends on which Xs are in a model, check assumptions after you’ve chosen predictors.
The best rule in statistical analysis: always stop and think about your particular data analysis situation.
If you don’t understand or don’t have the experience to evaluate your situation, discuss it with someone who does. Investigate it. This is how you’ll learn.
1. Tsigereda Kebede says
Thank you for this important information
2. Miguela Napiere says
Thank you so much! This is a great help.
3. Naomi says
Thank-you for this. I don’t know how many times I have had to explain these points (especially the outlier). I never do it so clearly will be sending people the link from now on.
4. D.RAMANATHAN says
The three rules quoted are briefly explained here and form as shields for the juniors Statisticians to travel fast into Statistical Analysis.
I am taking Statistics for the Management Studies students of the Management Schools at Madras Chennai Tamilnadu,India.
Leave a Reply Cancel reply | {"url":"https://www.theanalysisfactor.com/three-rules-statistical-analysis-unlearn/","timestamp":"2024-11-07T07:34:07Z","content_type":"text/html","content_length":"74452","record_id":"<urn:uuid:fab26113-3ddb-4c37-84cb-2d6915ea216d>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00811.warc.gz"} |
What is the probability that two events ANB will occur at the same time?
Use the specific multiplication rule formula. Just multiply the probability of the first event by the second. For example, if the probability of event A is 2/9 and the probability of event B is 3/9
then the probability of both events happening at the same time is (2/9)*(3/9) = 6/81 = 2/27.
What is the probability that either event A or event B occurs if both A and B events are mutually exclusive?
If Events A and B are mutually exclusive, P(A ∩ B) = 0. The probability that Events A or B occur is the probability of the union of A and B.
What is the difference between AUB and AnB?
Union The union of two sets A and B, written A U B, is the combination of the two sets. Intersection The intersection of two sets A and B, written AnB, is the overlap of the two sets.
What does P AUB ‘) mean?
P(A U B
P(A U B) is the probability of the sum of all sample points in A U B. Now P(A) + P(B) is the sum of probabilities of sample points in A and in B.
When 2 or more events occur in conjunction with each other their joint occurance is?
Compound event refers to the joint occurrence of two or more simple events.
When two events Cannot occur at the same time they are said to be?
In statistics and probability theory, two events are mutually exclusive if they cannot occur at the same time. The simplest example of mutually exclusive events is a coin toss.
What is the intersection of A and B in probability?
Intersection of A and B The intersection of events A and B, written as P (A ∩ B) or P (A AND B) is the joint probability of at least two events, shown below in a Venn diagram. In the case where A and
B are mutually exclusive events, P (A ∩ B) = 0. Consider the probability of rolling a 4 and 6 on a single roll of a die; it is not possible.
What is the probability that events A and B both occur?
The probability that Events A and B both occur is the probability of the intersection of A and B. The probability of the intersection of Events A and B is denoted by P(A ∩ B). If Events A and B are
mutually exclusive, P(A ∩ B) = 0. Since the question cites…
How do you interpret the intersection of a table?
Interpreting the table. Certain things can be determined from the joint probability distribution. Mutually exclusive events will have a probability of zero. All inclusive events will have a zero
opposite the intersection. All inclusive means that there is nothing outside of those two events: P(A or B) = 1.
What is the meaning of PP(B|a)?
P(B|A) This only applies when the events are independent of each other meaning event A has no effect on the probability of event B happening. The other case involes these two events when they are | {"url":"https://profoundadvices.com/what-is-the-probability-that-two-events-anb-will-occur-at-the-same-time/","timestamp":"2024-11-03T03:34:21Z","content_type":"text/html","content_length":"57613","record_id":"<urn:uuid:84bc135e-5533-40e5-b058-f034839e0ed5>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00866.warc.gz"} |
Mathematical Computer Science Faculty Areas
Daniel J. Bernstein, Research Assistant Professor
□ Ph.D. Berkeley, 1995
□ Computational number theory.
Neil E. Berger, Associate Professor
□ Ph.D. NYU 1968
□ Applied elasticity, fluid dynamics, scattering theory, numerical analysis, symbolic manipulation.
Martin Grohe, Assistant Professor
□ Ph.D.
□ Graph theory and algorithms, complexity theory, logic, database theory.
Robert L. Grossman, Professor of Computer Science, Director of Laboratory for Advanced Computing Director of National Center for Data Mining, and President of Magnify, Inc.;
□ Ph.D. Princeton 1985
□ Data intensive computing and data mining, symbolic and numeric computation, hybrid systems, digital libraries, and industrial mathematics.
Floyd B. Hanson, Professor
□ Ph.D. Brown 1968
□ Numerical analysis, asymptotic methods, stochastic dynamical systems modeling, stochastic optimal control, scientific supercomputing, scientific visualization, parallel scheduling, industrial
Richard Larson, Professor
□ Ph.D. Chicago 1965
□ Hopf algebras and quantum groups, control theory, algorithms of algebras.
Jeffrey Leon, Professor
□ Ph.D. Cal Tech 1971
□ Group theory and combinatorics, computer methods in group theory and combinatorics, algorithms.
Alexander Lipton (Lifschitz), Adjunct Professor
□ Ph.D. Moscow State Univ. 1982
□ Financial engineering, mathematical physics, computational methods.
Glenn Manacher, Associate Professor
□ Ph.D. Carnegie-Mellon 1961
□ Algorithms, complexity, computer language design.
Uri Peled, Professor
□ Ph.D. Waterloo 1976
□ Combinatorial optimization, graph theory, combinatorics.
Vera Pless, Professor
□ Ph.D. Northwestern 1957
□ Coding theory, combinatorics.
Steven Smith, Professor
□ Ph.D. Oxford 1973
□ Finite groups, representation theory, computational methods.
Jeremy Teitelbaum, Associate Professor - vita and publication list
□ Ph.D. Harvard 1986
□ Number theory.
Charles Tier, Professor
□ Ph.D. NYU 1976
□ Analysis of stochastic models, queuing theory, computer performance evaluation, telecommunications modeling, numerical analysis.
Gyorgy Turan, Professor
□ Ph.D. Jozsef A., Szeged 1982
□ Complexity theory, logic, combinatorics.
Jan Vershelde, Assistant Professor
□ Ph.D. K.U.Leuven 1996
□ Application of polynomial homotopy continuation to scientific and engineering problems, computational algebraic geometry, symbolic-numeric computation, combinatorial and polyhedral methods,
and mathematical software and applications.
Jennifer D. Wagner, Research Assistant Professor
□ Ph.D. U.C. San Diego 2000
□ Algebraic combinatorics, including symmetric functions and permutation statistics.
Web Source:http://www.math.uic.edu/~hanson/MCS-areas.html
Email Comments or Questions to Professor Hanson | {"url":"http://homepages.math.uic.edu/~hanson/MCS-areas.html","timestamp":"2024-11-10T02:20:36Z","content_type":"text/html","content_length":"4933","record_id":"<urn:uuid:39036be5-8d32-40b6-b962-f958cb67a6bd>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00759.warc.gz"} |
Relationship between the body mass index and the ponderal index with physical fitness in adolescent students
Background: The relationship between the Body Mass Index (BMI) with physical fitness in children and adolescent populations from diverse regions are consistent. However, the relationship between the
Ponderal Index (PI) with physical fitness, based on what is known to date, has not been examined in depth. The objective was to evaluate the relationships between BMI and PI with three physical
fitness tests of students living at moderate altitudes in Peru. Methods: A descriptive correlational study was carried out with 385 adolescents, between the ages of 10.0 to 15.9 years old, from the
province of Arequipa, Peru. Weight, height, and three physical fitness tests (horizontal jump, agility, and abdominal muscle resistance) were evaluated. BMI and PI were calculated, and they were,
then, categorized into three strata (low, normal, and excessive weight). Specific regressions were calculated for sex, using a non-lineal quadratic model for each item adjusted for BMI and PI.
Results: The relationship between BMI and PI with the physical tests reflected parabolic curves that varied in both sexes. The regression values for BMI in males oscillated between R^2 = 0.029 and
0.073 and for females between R^2 = 0.008 and 0.091. For PI, for males, it varied from R^2 = 0.044 to 0.82 and for females, from R^2 = 0.011 to 0.103. No differences occurred between the three
nutritional categories for BMI as well as for PI for both sexes (p range between 0.18 to 0.38), as well as for low weight (BMI vs PI), normal weight (BMI vs PI), and excessive weight (BMI vs PI) (p
range between 0.35 to 0.64). Conclusions: BMI showed inferior quadratic regressions with respect to the PI. In addition, physical performance was slightly unfavorable when it was analyzed by BMI. PI
could be a useful tool for analyzing and predicting physical fitness for adolescents living at a moderate altitude since it corrects for the notable differences for weight between adolescents.
• Adolescents
• Altitude
• Body mass index
• Physical aptitude
• Ponderal index
Dive into the research topics of 'Relationship between the body mass index and the ponderal index with physical fitness in adolescent students'. Together they form a unique fingerprint. | {"url":"https://cris.ucsm.edu.pe/en/publications/relationship-between-the-body-mass-index-and-the-ponderal-index-w","timestamp":"2024-11-08T15:39:10Z","content_type":"text/html","content_length":"55599","record_id":"<urn:uuid:221c314d-6a2d-410b-a1ca-f2ef1d904912>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00630.warc.gz"} |
Hooke's Law Equations Formulas Calculator - Spring Force Constant
Online Web Apps, Rich Internet Application, Technical Tools, Specifications, How to Guides, Training, Applications, Examples, Tutorials, Reviews, Answers, Test Review Resources, Analysis, Homework
Solutions, Worksheets, Help, Data and Information for Engineers, Technicians, Teachers, Tutors, Researchers, K-12 Education, College and High School Students, Science Fair Projects and Scientists
By Jimmy Raymond
Contact: aj@ajdesigner.com
Privacy Policy, Disclaimer and Terms
Copyright 2002-2015 | {"url":"https://www.ajdesigner.com/phphookeslaw/hookes_law_equation_spring_force_constant.php","timestamp":"2024-11-12T21:56:04Z","content_type":"text/html","content_length":"24505","record_id":"<urn:uuid:baf0a439-ba29-41da-82be-892590ac4ca3>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00415.warc.gz"} |
Numerical Modeling of an Advanced Semi-SWATH Hull in Calm Water and Regular Head Wave
• 1
Department of Mechanical Engineering, Sharif University of Technology, Tehran, 11155-9567, Iran
• 2
Department of Mechanical and Aerospace Engineering, Malek-Ashtar University of Technology, Esfahan, 1774-15875, Iran
A small waterplane area twin hull (SWATH) has excellent seakeeping performance and low wave-making resistance, and it has been applied to small working craft, pleasure boats, and unmanned surface
vehicles. However, with the increase in speed, the hydrodynamic resistance of SWATH will increase exponentially because of its large wet surface, followed by the uncomfortable situation of the
hull underwater part relative to the water level and in terms of high trim by stern and high sinkage. A way to improve this situation is to reduce the depth of the draft at high speeds to ensure
that all or a part of the volume of the submerged bodies is above the water level. Based on this idea, a new type of semi-SWATH hull form was analyzed in this paper. The two submerged bodies of
the SWATH have a catamaran boat shape. This paper employed Siemens PLM Star-CCM+ to study the hydrodynamic performance of an advanced semi-SWATH model. Bare-hull resistance was estimated for both
SWATH and CAT (CATAMARAN) modes in calm water. Moreover, the effect of fixed stabilizing fins with different angles on the vertical motions of the vessel in regular head waves was investigated
with an overset mesh approach. The vertical motion responses were estimated at different wave encounter frequencies, and the present numerical method results have been verified by already
published experimental data.
Article Highlights
• The numerical simulation of seakeeping performances for semi-SWATH hull has been carried out (in regular head wave and at different Froude numbers) and RAO diagrams for heave, pitch, and
vertical acceleration were depicted.
• The grid uncertainty study of CFD method for semi-SWATH hulls is studied.
• The effectiveness of fixed stabilizing fins, for control and reduction of semi-SWATH vertical motions under regular head wave, is studied.
• The numerical simulation of calm water performance for an advanced semi-SWATH hull at SWATH and CAT modes has been carried out by using CFD method and the weight fraction of each calm water
resistance components was studied.
Article Highlights
Abbreviations and Nomenclature a[CG]: Vertical acceleration at the Centre of gravity (m/s^2); B: Half beam of the vessel (m); BEM: Boundary Element Method CFD: Computational fluid dynamics; CFL:
Courant-Friedrichs-Lewy number; CG: Center of Gravity; C[f]: Friction coefficient; C[T]: Total resistance coefficient; C[W]: Wave resistance coefficient; DFBI: Dynamic Fluid Body Interaction; Fr:
Froude number $(V / \sqrt{g L})$ ; g: Acceleration due to gravity (m/s^2); GCI: Grid Convergence Index HRIC: High-Resolution Interface Capture; h[w]: Wave height (m); ITTC: International Towing
Tank Conference; k: Wave number (rad/m); L: Length between perpendiculars of the hull (m); MII: Motion-Induced Interruptions (1/min); MSI: Motion-Seasickness Incidence (%); RAO: Response
Amplitude Operators; R&D: Research and Development; SWATH: Small Waterplane Area Twin Hull; T: Draft of the vessel (m); T[e]: Wave encounter period (1/s); ∆t: Time step (s); U: Flow velocity (m/
s); u: Friction velocity (m/s) URANSE: Unsteady Reynolds-Averaged Navier-Stokes Equation; V: Speed of the vessel (m/s); v: Kinematic viscosity (m^2/s); w/wo: With or without; ∆x: Cell size
dimension (m); y: Absolute distance from the nearest wall (m); y+: Dimensionless wall distance (u^∗y/v); Z[a]: Heave motion amplitude (m); α: Volume fraction of water; β: Viscous interference
factor; (1 + K): Form factor; ζ[a]: Wave amplitude (m); θ[a]: Pitch motion amplitude (°); λ[w]: Wave length (m); ρ: Density of water (kg/m^3); τ: Wave resistance coefficient interference factor;
ω: Wave frequency (rad/s); ω^*: Dimensionless wave frequency (\omega $\sqrt{L / g})$ ; ∇: Volume of displacement (m^3)
• 1 Introduction
In recent years, improving the seakeeping performance has been one of the primary purposes of marine designers in the design of ships and floating structures. Environmental conditions during sea
voyages of a ship can have a significant effect on the operability of marine vehicles.
Given that a small waterplane area twin hull (SWATH) hull has excellent seakeeping performance compared with the other conventional crafts, it has been used for different applications. In
addition, numerical and experimental investigations have been carried out for the study and improvement of advanced SWATH hull performance in calm water and different wave conditions.
In addition to characteristics, such as the excellent seakeeping performance and low wave-making resistance of SWATH hull form, SWATHs are highly sensitive to weight distribution and dynamically
unstable when the relative speed increases. As regards motion resistance, their performances are in the same range as conventional hull forms, except at extremely low relative speeds, at which
the large wet surface worsens their performances. With regard to sea travels, the pitch response is of particular concern because, notably, large encounter periods that are close to the pitch
resonance are likely to occur over a wide range of wavelengths. With the increase in speed, the hydrodynamic resistance of SWATH will increase exponentially because of its large wet surface,
followed by the uncomfortable situation of the hull underwater part relative to the water level and in terms of high trim by stern and high sinkage. A way to improve this situation is to reduce
the depth of the draft at high speeds to ensure that all or a part of the volume of submerged bodies is above the water level. Based on this idea, several new types of semi-SWATH hull forms have
been developed. In addition, based on the design purposes of crafts and their specific missions, different types of HYBRID SWATH hull were designed and studied for the improvement of standard
SWATH hull form performance at high speeds in calm water and rough seas. Most designs concern the replacement of standard SWATH submerged bodies with a catamaran, planing, and other hull shapes
to cope with the natural disadvantages of the SWATH hull form. In SWATH-CAT hull form, the two submerged bodies of the SWATH have a catamaran shape. Therefore, at a low draft, the SWATH-CAT
behaves as a catamaran, and at higher drafts, the SWATH mode will be used. Based on the different modes of operation (SWATH or CAT mode), the vessel has different performances, and this
flexibility in features compared with standard SWATH or catamaran hull forms will give interesting characteristics for most of SWATH application fields (Dubrovsky and Lyakhovitsky 2001; Dubrovsky
et al. 2007). The standard SWATH hull form also has numerous applications in offshore industries. The buoyancy force is provided by two torpedo-like submerged hulls below the water level. Struts
connect the lower hulls to the transverse structures above the water level. The struts have a small waterplane area, consequently making the vessel less sensitive to the wave impacts compared
with other conventional vessels, such as mono-hulls and catamarans (Gupta and Schmidt 1986).
In 2001, a new catamaran hull form was developed by BMT Nigel Gee in the UK, which led to the semi-SWATH (CAT mode) hull form technology. In this new technology, the waterplane is more
constricted, and the height of the center of buoyancy has lower values. Moreover, the bulbous bow has a slender shape (Yun et al. 2018). The semi-SWATH technology fills the gaps in the
hydrodynamics performance between the catamaran and SWATH. Hence, the motion-seasickness incidence and motion-induced interruptions of the semi-SWATH decrease compared with those of the
catamaran. Conversely, the SWATH requires a significant power in comparison with the semi-SWATH (Jupp et al. 2014). In other words, the semi-SWATH technology is essentially an attempt to show the
benefits of the catamaran and SWATH and to avoid their drawbacks.
The first research about numerical optimization of SWATH hull form was carried out by Salvesen et al. (1985), who presented and developed a computational method for wave resistance minimization.
Chan (1993) investigated the motion and dynamic structural responses of antisubmarine rescue catamaran and 3000-ton SWATH by a 3D linearized potential theory related to a cross-flow method with
consideration of viscous effects. Campana and Peri (2000) introduced a new hull form with a medium waterplane area called MWATH by reforming the strut shape and underwater gondolas of a SWATH
hull. Guttenplan (2007) studied the performance of a prototype 10000 kg reduced waterplane area twin hull, including the effect of variable demi-hull separation on the resistance in calm water
and quasi-active foil control on the motion responses in waves by Rankine panel numerical method using SWAN2 2002 software package. Brizzolara et al. (2015) studied the hydrodynamic performance
of unconventional SWATH and semi-SWATH with numerical methods. In their research, the resistance force in calm water was calculated using the boundary element method (BEM) combined with viscous
effects and multiphase unsteady Reynolds-averaged Navier-Stokes equation (URANSE) solver. Heave and pitch responses of two hull forms were also obtained in regular head wave conditions by a
frequency domain 3D panel method. Begovic et al. (2015) provided a detailed report on the computational fluid dynamics (CFD) numerical method for the hydrodynamic assessment of the SWATH concept;
their study was conducted in CD Adapco Star-CCM+ using the overset mesh approach. For a semi-SWATH hull form, the influence of stabilizing fins on the resistance in calm water has been
experimentally carried out by Ali et al. (2015). The semi-SWATH model was tested with fixed fore fins at 0° and aft fin angle adjustable to 0°, 5°, and 15° at the range of Froude numbers from
0.34 to 0.69. Their evaluations showed that the fluid flow around the hull at different speeds was under the influence of the fin angle, and this dependence varied based on the Froude number.
Vernengo and Bruzzone (2016) studied and compared the calm water resistance and seakeeping quality of a new semi-SWATH design and a classical single-strut SWATH hull using the numerical BEM
developed by Bruzzone (1994, 2003). Furthermore, the authors investigated the effect of passive stabilizing fins on heave and pitch responses of a full-scale conventional SWATH hull, which has
been experimentally tested by Kallio (1976). Wang et al. (2016), by presenting a SWATH planing unmanned surface vessel and using the CFD method, numerically simulated this vessel at various
velocities and showed that given the reduction of the wetted surface and the production of the desired lift force, the resistance can be significantly reduced at high speeds. Sun et al. (2016)
investigated the seakeeping performance of a slender catamaran with a semisubmerged bow using the CFD method. In this study, based on the overset mesh and motion region techniques, motion
responses of the vessel in regular head waves were estimated at various wavelengths and speeds. They showed that the overset mesh technique is precise in predicting motions. Vernengo and
Brizzolara (2017), based on previous studies about SWATH optimization (Brizzolara and Vernengo 2011), presented a systematic evaluation of the influence of various forms and canting angles of
struts. Vernengo et al. (2018) evaluated the motion responses of three hull forms, including SWATH, catamaran, and trimaran, by applying a first-order 3D BEM. Bonfiglio et al. (2018) studied the
seakeeping performance of SWATH design using multifidelity Gaussian process regression and Bayesian optimization. Their studies indicated the excellent features of this optimization framework in
modeling and identifying optimal alternative designs with a remarkable reduction in computational costs. Begovic et al. (2019) performed a broad experimental study on four different SWATH model
hull forms in calm water and regular and irregular waves. The scope of their research was the calculation of resistance in calm water and heave, pitch, and vertical acceleration RAO diagrams in
waves. Pérez-Arribas and Calderon-Sanchez (2020) introduced a method based on a parametric computer to design a B-spline model of a SWATH hull with the use of Chebyshev functions. In this new
technique, variables, including displacement (Δ), waterplane area (A[wp]), center of buoyancy, and center of flotation, can be controlled.
In this paper, the CFD numerical method by Siemens PLM Star-CCM+ was used to predict resistance force in calm water and simulation of vertical motions in regular head waves for an advanced
semi-SWATH model. In calm water, the total resistance for SWATH and CAT modes was calculated and compared with available experimental data. The effect of fixed stabilizing fins on the reduction
of heave and pitch motions in regular head wave conditions for a specific encounter frequency was also investigated, and the semi-SWATH bare-hull vertical motions in regular head waves, including
heave, pitch, and vertical acceleration RAO, were estimated at four different Froude numbers and wide wave frequency range.
2 Main Characteristics of Semi-SWATH Hull Form and Fixed Stabilizing Fins
Table 1 shows the main particulars of the semi-SWATH model used in the present study. The selected hull form model for the present study was based on Yaakob and Mekanikal (2006) experimental
investigation; the same model has been used. Table 2 provides the main particulars of the stabilizing fins. Figure 1 shows the semi-SWATH model equipped with stabilizing fins.
Properties Value
Length of the main hull (m) 2.31
Maximum beam (m) 0.80
Draft (SWATH mode) (m) 0.20
Draft (CAT mode) (m) 0.14
Displacement (SWATH mode) (kg) 76.877
Displacement (CAT mode) (kg) 53.176
Radius of gyration for pitch (m) 0.578
Longitudinal center of gravity abaft midships (m) 0.089
Maximum speed (m/s) 3.25
Properties Fore fin Aft fin
Chord (m) 0.096 0.145
Span (m) 0.120 0.186
Longitudinal location^1 (m) 1.95 0.35
Vertical location^2 for SWATH mode (m) 0.151 0.151
Vertical location^2 for CAT mode (m) 0.092 0.092
Maximum thickness (m) 0.015 0.023
Fin type NACA–0015
^1Distance from the main hull stem to the fin quarter–chord point
^2Distance from the water level to the chord line
3 Numerical Simulation Procedures Description (Physical Setup, Modeling, and Grid Uncertainty Analysis)
3.1 Governing Equations and Numerical Simulation Setup
In this paper, simulations were carried out using the finite volume method for the solution of URANSE (Ferziger and Perić 2002):
$$ {\displaystyle \begin{array}{l}\frac{\partial \left(\rho {\overline{u}}_i\right)}{\partial {x}_i}=0\\ {}\begin{array}{l}\frac{\partial \left(\rho {\overline{u}}_i\right)}{\partial t}+\frac
{\partial }{\partial {x}_j}\left(\rho {\overline{u}}_i{\overline{u}}_j+\rho \overline{u_i^{\prime }{u}_j^{\prime }}\right)=-\frac{\partial \overline{p}}{\partial {x}_i}+\frac{\partial {\ (1)
overline{\tau}}_{ij}}{\partial {x}_j}\\ {}\frac{\partial \alpha }{\partial t}+{\overline{u}}_i\frac{\partial \alpha }{\partial {x}_i}=0\end{array}\end{array}} $$
The governing equations include the continuity, momentum, and volume fraction transport equations for incompressible flows. Here, $ {\overline{\tau}}_{ij} $ is the mean viscous stress tensor
$$ {\overline{\tau}}_{ij}=\mu \left(\frac{\partial {\overline{u}}_i}{\partial {x}_j}+\frac{\partial {\overline{u}}_j}{\partial {x}_i}\right) $$ (2)
Variable ρ is the mixture density, $ {\overline{u}}_i $ is the averaged Cartesian components of the velocity vector in the x[i] direction (i, j = 1, 2, 3), $ \rho \overline{u_i^{\prime }{u}_j^{\
prime }} $ is the Reynolds stress, $ \overline{p} $ is the mean pressure field, μ is the dynamic viscosity, and (α) represents the volume fraction of water inside each cell.
The component of $ \rho \overline{u_i^{\prime }{u}_j^{\prime }} $ was obtained based on the Boussinesq approximation by the k–ε turbulence model selected for this simulation:
$$ {\displaystyle \begin{array}{l}\frac{\partial (k)}{\partial t}+\frac{\partial \left({\overline{u}}_jk\right)}{\partial {x}_j}=\frac{\partial }{\partial {x}_j}\left[\left(v+\frac{v_t}{\
sigma_k}\right)\frac{\partial k}{\partial {x}_i}\right]+{P}_k-\varepsilon \\ {}\frac{\partial \left(\varepsilon \right)}{\partial t}+\frac{\partial \left({\overline{u}}_j\varepsilon \right)} (3)
{\partial {x}_j}=\frac{\varepsilon }{k}\left({C}_{\varepsilon 1}{P}_k-\rho {C}_{\varepsilon 2}\varepsilon \right)+\frac{\partial }{\partial {x}_j}\left[\left(v+\frac{v_t}{\sigma_{\varepsilon
}}\right)\frac{\partial \varepsilon }{\partial {x}_j}\right]\end{array}} $$
where v[t] = C[μ]k^2/ε is the eddy viscosity; k is the turbulent kinetic energy; ε is the dissipation term of turbulent kinetic energy; P[k] is the production of turbulent kinetic energy; and C[μ
] = 0.9, C[ε1] = 1.44, C[ε2] = 1.92, σ[k] = 1.0, σ[ε] = 1.3 are model constants.
Frisk and Tegehall (2015) noted that the standard k–ε model is robust and gives accurate results for free surface simulations for completely turbulent flows. Table 3 summarizes the numerical
simulation setup summary. The volume of fluid (VOF) method was employed with high-resolution interface capturing (HRIC). The primary application of the implicit unsteady solver is the detection
and management of the region of all unknown hydrodynamic values with an iterative solver for each time step. Only the unsteady solver can be combined with the segregated flow model (Voxakis 2012
). A minimum of 10 inner iterations for each time step was used.
Parameter Settings
Turbulence model Standard k–ε
Continuity and momentum equation coupling SIMPLE - Algorithm
Method Segregated flow
Solver 3D, implicit unsteady
Multiphase model VOF
Time step Equations (5) and (6)
Time discretization First order
Convection scheme for VoF HRIC
Iterations per time step 10
The convective Courant number (CFL) relates the time step (∆t) to the grid flow velocity (U) and the cell size dimension (∆x) as follows:
$$ \mathrm{CFL}=U\frac{\Delta t}{\Delta x}\kern0.5em $$ (4)
In general, the CFL should be less than or equal to 1 to achieve the desired results and stability of the numerical solution. Reducing this value gives numerical solution stability despite the
increase in computational cost.
In implicit unsteady simulations, the time step value was obtained based on the flow properties. Therefore, the time step employed in the calm water condition, as a function of vessel speed (V)
and length between perpendiculars (L), was determined in accordance with the ITTC (2011):
$$ \Delta {t}_{\mathrm{Calm}\ \mathrm{water}}=0.005-0.01\frac{L}{V} $$ (5)
In wave condition simulation, at least 100 time steps for each wave encounter period (T[e]) were utilized, as suggested by ITTC (2011) and another relevant study (Tezdogan et al. 2015).
Therefore, for this condition, the time step size was computed as follows:
$$ \Delta {t}_{\mathrm{Wave}}=\frac{T_e}{100} $$ (6)
3.2 Computational Domain and Boundary Conditions
Figures 2 and 3 illustrate the computational domain dimensions and applied boundary conditions, respectively. The domain dimensions must be large enough to achieve high precision and reliable
numerical results. In Figure 2, L, B, and T are the length between perpendiculars, half beam, and draft of the vessel, respectively. For simulations of free surface with incident waves, the
computational domain is extended 1.5L upstream of the hull, which is defined as an inlet face, and 5L downstream, which is considered an outlet face, to avoid any wave reflections according to
the ITTC (2011) recommendations. Similar previous works were also studied to set up the appropriate location of computational domain faces (Table 4). Given the symmetry, only half of the model
hull was modeled. Therefore, the symmetry plane condition of the longitudinal centerline was considered. Other boundary conditions were defined as follows. The inlet, side, top, and bottom faces
were set to velocity inlet as a field function of volume fraction and velocity. The outlet face was imposed on the pressure outlet as a field function of volume fraction and pressure. The vessel
hull surface was defined as a no-slip wall condition.
Reference Inlet face Outlet face Top face Bottom face Side face
Tezdogan et al. (2015) 1.15L 4.5L L 2.3L 2.5L
Sun et al. (2016) 1.5L 3L L 2L L
Kahramanoğlu et al. (2020) 2.75L 7.75L 0.9L 1.9L 3L
In calm water, the computational domain includes the stationary domain only, and the vessel is fixed to heave and pitch. Given that only the resistance force prediction is targeted, for the
simulation in wave conditions, the computational domain was specified in two regions to predict vessel responses, including background (stationary domain) and overset (moving domain) regions.
Therefore, during the simulation in wave conditions, the vessel was free to heave and pitch motions and move with the overset region at each time step. For this purpose, a setting called dynamic
fluid body interaction was employed to simulate the interactions between the fluid and rigid body.
A linear interpolation approach was employed to control the numerical data transfer between the two regions. In the overset method, no specific recommendation concerns the size of the overset
region dimensions. However, dimensions with a sufficient number of cells between the surface boundary of the vessel hull inside the overset and background regions are acceptable.
3.3 Grid Generation
The grid division was performed using the automated mesh technique in the Siemens PLM Star-CCM+ package. Both simulation conditions (calm water and wave) were accomplished with an unstructured
hexahedral cell (trimmed) mesher, which is suitable for solving complex problems, especially at the free surface. The mesh generation process was conducted based on dimensions, including the
total thickness of prism layers and maximum and minimum sizes of grid cells in the desired areas (surface and volume controls), especially around the hull surface with fins and the free surface
area as a percentage of base size.
When employing the overset mesh approach, a refinement area called the "overlapping zone, " where the numerical data are exchanged between the stationary and moving domains through the
overlapping zone grids, must be used. Siemens PLM Star-CCM+ user guide (Siemens 2019) provided suggestions on how to set appropriate grid cells in the overlapping zone. Figure 4 depicts the 3D,
top, side, and front views of grid division in the computational domain.
3.3.1 Grid Independence Analysis
A grid independence analysis is an essential issue for numerical simulations. All the physical phenomena related to flow around the vessel must be modeled with the desired quality. Therefore, the
sensitivity of the numerical results concerning cell number and dimensionless wall distance (y+=u^∗y/v, where u^∗ is the friction velocity, y is the absolute distance from the nearest wall, and v
is the kinematic viscosity) for proper turbulence modeling should be investigated. For this purpose, four types of cell numbers, namely, 0.9 × 10^6 (A-type), 1.27 × 10^6 (B-type), 1.8 × 10^6
(C-type), and 2.55 × 10^6 (D-type), were selected for the simulation in calm water, and another four types, such as 2.50 × 10^6 (A′-type), 3.53 × 10^6 (B′-type), 5.0 × 10^6 (C′-type), and 7.1 ×
10^6 (D′-type), were used for wave conditions. A high-y+ treatment model was employed to keep the low numbers of grid cells. In this particular approach, near-wall y+ was kept at a value higher
than 30 as recommended by Siemens PLM Star-CCM+ user guide (Siemens 2019). A good range was between 60 and 130, as reported by Cucinotta et al. (2018). Figures 5 and 6 show the numerical results
in calm water (total resistance) and wave (heave and pitch amplitudes), respectively, for the bare-hull as a function of cell number for two different values of y+ approximately 70 and 140. The
cell numbers of 1.8 × 10^6 and 3.53 × 10^6 with y+ of 70 were selected for the simulation in calm water and wave conditions, respectively. Table 5 contains the number of cells in both overset and
background regions for each simulation.
Simulation Region
Background Overset Total
Calm water 1.80 × 10^6 − 1.80 × 10^6
Wave condition 1.35 × 10^6 2.18 × 10^6 3.53 × 10^6
3.3.2 Grid Uncertainty Estimation
The grid (U[G]), time step (U[TS]), and iterative (U[I]) uncertainties are the main sources of numerical uncertainty (U[SN]):
$$ {U_{\mathrm{SN}}}^2={U_G}^2+{U_{\mathrm{TS}}}^2+{U_I}^2 $$ (7)
Among the main sources of numerical uncertainties mentioned above, grid uncertainty has the greatest impact, as reported by Wilson et al. (2001) and De Luca et al. (2016).
Special attention unless commonly used in your field, we suggest providing the meaning of acronyms at first mention in both abstract and main text. However, acronyms are not included when terms
are mentioned only once in the paper. For this purpose, the difference between any solution scalars (ε) can be computed as follows:
$$ {\varepsilon}_{BC}={\varphi}_B-{\varphi}_C,\kern0.5em {\varepsilon}_{AB}={\varphi}_A-{\varphi}_B\kern0.5em $$ (8)
where φ[A], φ[B], and φ[C] refer to the value of any scalar of coarse, medium, and fine-grid size, respectively. The convergence ratio was utilized to evaluate the convergence condition (R), as
given in Eq. (9):
$$ R=\frac{\varepsilon_{BC}}{\varepsilon_{AB}}\kern0.5em $$ (9)
According to the ITTC (2002) guidelines and Stern et al. (2006), three convergence conditions are defined as follows:
$$ \left\{\begin{array}{l}\mathrm{Monotonic}\ \mathrm{convergence}:\\ {}\begin{array}{l}\mathrm{Oscillatory}\ \mathrm{convergence}:\\ {}\mathrm{Divergence}:\end{array}\end{array}\kern0.5em \ (10)
begin{array}{c}0 \lt R \lt 1\\ {}\begin{array}{c}-1 \lt R \lt 0\\ {}R \lt -1,R \gt 1\end{array}\end{array}\right. $$
The apparent order (p) can be obtained as follows (Celik et al. 2008):
$$ {\displaystyle \begin{array}{c}p=\frac{1}{\ln {r}_{BC}}\left|\ln \left|{\varepsilon}_{AB}/{\varepsilon}_{BC}\right|+q(p)\right|\\ {}\begin{array}{c}q(p)=\ln \left(\frac{r_{BC}^P-s}{r_{AB} (11)
^P-s}\right)\\ {}s=\operatorname{sign}\ \left({\varepsilon}_{AB}/{\varepsilon}_{BC}\right)\end{array}\end{array}} $$
Table 6 presents the other terms related to the GCI estimation, including the extrapolated value, approximate relative error, extrapolated relative error, and fine-grid convergence index (Celik
et al. 2008).
Parameter Definition
${\varphi}_{\mathrm{ext}}^{BC}=\frac{r_{BC}^P{\varphi}_C-{\varphi}_B}{r_{BC}^P-1}$ Extrapolated value
$e_{a}^{B C}=\left|\frac{\varphi_{C}-\varphi_{B}}{\varphi_{C}}\right|$ Approximate relative error
$e_{\text {ext }}^{B C}=\left|\frac{\varphi_{e x t}^{C B}-\varphi_{C}}{\varphi_{e x t}^{C B}}\right|$ Extrapolated relative error
${\mathrm{GCI}}_{\mathrm{Fine}}^{BC}=\frac{1.25{e}_a^{BC}}{r_{BC}^P-1}$ Fine-grid convergence index
Tables 7 and 8 present the uncertainties of calm water total resistance and heave and pitch amplitudes in regular head wave conditions for the semi-SWATH model, respectively. As shown in Table 7,
in both A-B-C and B-C-D cases, the total resistance converged monotonically. In Table 8, the grid uncertainty study showed monotonic convergence for both heave and pitch amplitudes in the case of
A′-B′-C′, whereas for the case of B′-C′-D′, the pitch amplitude displays oscillatory convergence. After the GCI method assessment, the CFD simulations in calm water and wave conditions were
carried out with C- and B′-type grids, respectively.
Calm water
Parameter (Fr = 0.68)
Case of A-B-C Case of B-C-D
r[BC], r[AB] $\sqrt{2}$ $\sqrt{2}$
φ[A] 65.00 66.80
φ[B] 66.80 67.42
φ[C] 67.42 67.63
R 0.3444 0.3387
p 3.0753 3.1237
${\varphi}_{\mathrm{ext}}^{BC}$ 67.7457 67.7375
${e}_{\mathrm{ext}}^{BC}$ 0.4831% 0.1590%
${e}_a^{BC}$ 0.9196% 0.3105%
${\mathrm{GCI}}_{\mathrm{Fine}}^{BC}$ 0.6040% 0.1988%
Parameter Wave condition
(λ[w]/L = 1.8, Fr = 0.512)
Case of A′-B′-C′ Case of B′-C′-D′
Heave (m) Pitch (°) Heave (m) Pitch (°)
r[BC], r[AB] $\sqrt{2}$ $\sqrt{2}$ $\sqrt{2}$ $\sqrt{2}$
φ[A′] 0.0350 1.450 0.0395 2.273
φ[B′] 0.0395 2.273 0.0410 2.310
φ[C′] 0.0410 2.310 0.0412 2.285
R 0.3333 0.0449 0.1333 −0.67
p 3.1699 8.9506 5.8138 1.1311
${\varphi}_{\mathrm{ext}}^{B\prime \mathrm{C}\prime }$ 0.0417 2.3117 0.0412 2.2329
${e}_{\mathrm{ext}}^{B\prime \mathrm{C}\prime }$ 1.8292% 0.0754% 0.0747% 2.2793%
${e}_a^{B\prime \mathrm{C}\prime }$ 3.6585% 1.6017% 0.4854% 1.0941%
${\mathrm{GCI}}_{\mathrm{Fine}}^{B\prime \mathrm{C}\prime }$ 2.2865% 0.0942% 0.0933% 2.8492%
4 Performance Evaluation in Calm Water and Regular Head Waves
Figure 7 illustrates the numerical simulation steps of the semi-SWATH model in calm water and wave conditions at a glance.
4.1 Resistance in Calm Water
The total resistance coefficient (C[T]) for multihull vessels can be written as follows:
$$ {C}_T=\tau {C}_W+\left(1+\beta k\right){C}_f\kern0.5em $$ (12)
where k is the form factor, which was assumed similar for the single and multihull analyses. τ is the wave resistance interference factor. β is the viscous resistance interference factor.
As shown in Figure 8, the total resistance–weight ratio (R[T]/∆) of the present work was compared with the numerical results from Ali et al. (2014) research and available experimental data for
the semi-SWATH model in SWATH and CAT modes. The available experimental data were measured in a towing tank at the Marine Technology Center of Universiti Teknologi Malaysia (UTM), as reported by
Ali et al. (2014).
Table 9 shows the percentage of error between the CFD results of the present work and the available experimental data at different Froude numbers from 0.31 to 0.68.
Fr Total resistance–weight ratio (R[T]/∆)
CAT mode SWATH mode
Present work Towing tank of UTM Error (%) Present work Towing tank of UTM Error (%)
0.31 0.022 0.023 −4.35 0.021 0.022 −4.54
0.48 0.059 0.058 1.72 0.072 0.070 2.86
0.56 0.077 0.085 −9.41 0.094 0.101 −6.93
0.68 0.073 0.084 −13.10 0.089 0.092 −3.26
Figure 8 shows that the presented CFD results are in good correlation with the experimental data compared with the CFD results of Ali et al. (2014) due to the high quality of generated grid cells
on the hull surfaces and control volumes in the required wake areas. At Fr > 0.56, the presented numerical method underestimated the total resistance–weight ratio for SWATH and CAT modes. The
maximum error between the results of the present work and experimental data approximated 6.93% and 13.10% (Table 9) for the SWATH and CAT modes, respectively. The main reason for the relatively
high discrepancies of the results at 𝐹𝑟 > 0.56 can be the fixed heave and pitch motions of the vessel in numerical simulations of the present work. In general, the comparison of the total
resistance–weight ratio of SWATH and CAT modes indicated the advantages of the semi-SWATH hull with respect to the standard SWATH hull form.
The curves (dashed and dot-dash) estimated in Figure 9 indicate that the vessel at SWATH mode has higher viscous resistance (R[v]) than the CAT mode due to the higher wetted surface. Figure 9
shows that the wave resistance (R[w]) for CAT and SWATH modes at the Froude number of 0.56 peaked and then reduced. Figure 10a, b, c, and d show the cross-comparison of wave pattern predicted by
the CFD method in calm water for SWATH and CAT modes at Froude numbers from 0.31 to 0.68. Figure 10 demonstrates that the vessel at SWATH mode generated waves with relatively large peaks and
troughs with respect to the CAT mode on the inner and outer sides of the struts for different velocities, consistent with the finding in Figure 9 (green and red curves). Therefore, the main
reason for total resistance reduction of the vessel for the SWATH and CAT modes at Fr > 0.56 (as depicted in Figure 8) was the reduction of wave resistance, which is consistent with the findings
of Brizzolara et al. (2015).
4.2 Motion in Wave
In this section, heave and pitch motions of the semi-SWATH model with and without fixed stabilizing fins were obtained. First, simulations were carried out based on Yaakob and Mekanikal (2006)
experimental test on regular head wave conditions with wavelength λ[w]/L = 1.8 and wave steepness h[w]/λ[w] = 0.02 at a Froude number of 0.512. The fore fin angle was fixed at 15°, whereas the
aft fin angle was varied to 5°, 10°, and 15° in accordance with the work of Yaakob and Mekanikal (2006). The stall effect (lift breakdown) occurred at more than 0.4 rad (23°) for the NACA-0015
series, as reported by Whicker and Fehlner (1958) and Gregory (1973).
Before obtaining the heave and pitch motions of the model, a wave elevation calibration test was performed by a wave probe, which was located between the inlet face and the model in the
computational domain. The time series of the elevation of generated and sinusoidal waves with the same encounter frequency were compared (Figure 11). A slight phase delay was observed between the
generated and sinusoidal waves. The difference in the amplitude between the generated and sinusoidal waves was about 1.86%, which indicates that the grid cell size and time step utilized are
reasonable for the current simulation model (Tezdogan et al. 2015).
Figure 12 illustrates the comparison of numerical results and experimental data. The origin of the slight phase delay between the present results and experimental data (in all plots) was due to
an error in the generated wave by the numerical method.
Table 10 shows the maximum value of the heave and pitch motion reduction of the vessel equipped with stabilizing fins compared with the bare-hull case. The results showed that the best condition
for reducing the heave motion was at an aft fin angle of 10°, whereas the best condition for reducing the pitch motion was at an aft fin of 15°. Thus, fixed stabilizing fins are practical tools
for reducing vertical motions.
Fore fin angle (°) Aft fin angle (°) Heave motion reduction (%) Pitch motion reduction (%)
15 5 20.32 16.98
15 10 34.54 23.18
15 15 32.08 35.17
The heave, pitch, and vertical acceleration RAO diagrams of the semi-SWATH bare-hull model in regular head wave conditions with constant height h[w] = 0.0857 m at Froude numbers ranging from 0 to
0.51 were obtained based on the cases mentioned in Table 11. The corresponding equations in dimensionless forms of RAO are given below:
$$ {\displaystyle \begin{array}{c}{\mathrm{RAO}}_{\mathrm{heave}}=\frac{Z_a}{\zeta_a}\kern0.5em \\ {}{\mathrm{RAO}}_{\mathrm{pitch}}=\frac{\theta_a}{k{\zeta}_a}\\ {}{\mathrm{RAO}}_{\mathrm (13)
{acc}}=\frac{aL}{g{\zeta}_a}\end{array}} $$
Case Wave/vessel length $\left(\frac{\lambda_w}{L}\ Wave steepness $\left(\frac{h_w}{\lambda_w}\ Wave frequency (ω) (rad Dimensionless wave frequency $\left({\omega}^{\ast }=\omega \sqrt{L/
No. right)$ right)$ /s) g}\right)$
1 0.85 0.0436 5.6 2.72
2 1 0.0370 5.16 2.50
3 1.4 0.02s65 4.36 2.12
4 1.8 0.0206 3.84 1.87
5 2 0.0185 3.65 1.77
6 2.19 0.0169 3.49 1.70
7 2.4 0.0154 3.33 1.62
8 3 0.0123 2.98 1.44
9 4.5 0.0082 2.43 1.18
10 6.5 0.0057 2.02 0.98
11 9 0.0041 1.72 0.83
where Z[a] is the amplitude of heave motion, ζ[a] = h[w]/2 is the wave amplitude, θ[a] is the amplitude of pitch motion, k is the wavenumber, a is the vertical acceleration at the center of
gravity (CG) or bow, and g is the gravitational acceleration.
Figure 13 shows the heave and pitch RAO diagrams versus the dimensionless wavelength (λ[w]/L) at four different Froude numbers.
Using Figure 13, the following results were obtained:
1) As the speed increased, the level of heave RAO increased, whereas the level of pitch RAO increased first and then decreased at Fr > 0.17.
2) At Fr = 0.34, 0.51, a double-peak trend was found for the pitch RAO of the vessel. At both Froude numbers, the first pitch motion peak displayed lower and nearly appears in the proximity of λ[
w]/L = 2 (ω^∗ ≅ 1.77). The second peak had a higher value and occurred in the range [4.5–6.5] of the dimensionless wavelength (0.98 ≤ ω^∗ ≤ 1.18). In general, the presence of the peaks can be due
to the resonance caused by the coupling of heave and pitch motions.
3) At Fr = 0.34, at a dimensionless wavelength of about λ[w]/L ≅ 2.19 (in frequency about ω^∗ ≅ 1.70), when the heave RAO reached its peak, the pitch RAO was at its minimum value close to 0. This
event occurred when the vessel was periodically at the crest or trough of the incoming waves. In this interesting phenomenon, which can be called "pitch cancelation, " almost no moment is applied
to the semi-SWATH vessel.
Figure 14 shows the vertical accelerations at CG and bow in the dimensionless form of the RAO diagram at different Froude numbers. In general, increasing the speed can have two effects on the
vertical acceleration responses. First, the maximum vertical acceleration will increase with speed for the constant wavelength (at a wavelength of about λ[w]/L > 1.4), and secondary the resonance
frequency will be at long wavelengths for higher speeds.
5 Conclusion Remarks
The numerical simulation of an advanced semi-SWATH hull in calm water and regular head wave has been done, and the effectiveness of fixed stabilization fins for control and reduction of SWATH
vertical motion has also been evaluated. The numerical results also compared with available experimental data, and as a result, the following conclusions can be obtained:
1) By comparing the numerical results and already published experimental data, the CFD numerical method has acceptable accuracy for the prediction of vertical motions of semi-SWATH hulls.
2) Double-peaked RAO were found for pitch motion at Fr = 0.34, 0.51. The appearance of the peaks may be due to the resonance phenomenon caused by heave and pitch coupling.
3) When the semi-SWATH vessel was subjected to incoming waves with a length of about 2.19R and the vessel length at Fr = 0.34, almost no moment was imposed on it. This interesting phenomenon in
which only heaving motion is present can be called "pitch cancelation."
4) Based on the presented results, fixed stabilizing fins can also be used for vertical motion reduction of semi-SWATH vessels under head wave conditions.
5) For future research, the calculation of hydrodynamic coefficients of the semi-SWATH vessel using CFD methods and the development of mathematical models for the assessment of the efficiency of
active stabilizing fins by modern optimal control theory is intended.
Figure 8 Computed resistance–weight ratio using the CFD method (present work and Ali et al. (2014)) in comparison with available experimental data at different Froude numbers from 0.31 to 0.68
Table 1 Main particulars of the semi-SWATH model (Yaakob and Mekanikal 2006)
Properties Value
Length of the main hull (m) 2.31
Maximum beam (m) 0.80
Draft (SWATH mode) (m) 0.20
Draft (CAT mode) (m) 0.14
Displacement (SWATH mode) (kg) 76.877
Displacement (CAT mode) (kg) 53.176
Radius of gyration for pitch (m) 0.578
Longitudinal center of gravity abaft midships (m) 0.089
Maximum speed (m/s) 3.25
Table 2 Main particulars of stabilizing fins (Yaakob and Mekanikal 2006)
Properties Fore fin Aft fin
Chord (m) 0.096 0.145
Span (m) 0.120 0.186
Longitudinal location^1 (m) 1.95 0.35
Vertical location^2 for SWATH mode (m) 0.151 0.151
Vertical location^2 for CAT mode (m) 0.092 0.092
Maximum thickness (m) 0.015 0.023
Fin type NACA–0015
^1Distance from the main hull stem to the fin quarter–chord point
^2Distance from the water level to the chord line
Table 3 Numerical simulation setup summary
Parameter Settings
Turbulence model Standard k–ε
Continuity and momentum equation coupling SIMPLE - Algorithm
Method Segregated flow
Solver 3D, implicit unsteady
Multiphase model VOF
Time step Equations (5) and (6)
Time discretization First order
Convection scheme for VoF HRIC
Iterations per time step 10
Table 4 Location of computational domain faces in similar previous works
Reference Inlet face Outlet face Top face Bottom face Side face
Tezdogan et al. (2015) 1.15L 4.5L L 2.3L 2.5L
Sun et al. (2016) 1.5L 3L L 2L L
Kahramanoğlu et al. (2020) 2.75L 7.75L 0.9L 1.9L 3L
Table 5 Number of cells for calm water and wave condition simulations
Simulation Region
Background Overset Total
Calm water 1.80 × 10^6 − 1.80 × 10^6
Wave condition 1.35 × 10^6 2.18 × 10^6 3.53 × 10^6
Table 6 GCI estimation-related parameters (Celik et al. 2008)
Parameter Definition
${\varphi}_{\mathrm{ext}}^{BC}=\frac{r_{BC}^P{\varphi}_C-{\varphi}_B}{r_{BC}^P-1}$ Extrapolated value
$e_{a}^{B C}=\left|\frac{\varphi_{C}-\varphi_{B}}{\varphi_{C}}\right|$ Approximate relative error
$e_{\text {ext }}^{B C}=\left|\frac{\varphi_{e x t}^{C B}-\varphi_{C}}{\varphi_{e x t}^{C B}}\right|$ Extrapolated relative error
${\mathrm{GCI}}_{\mathrm{Fine}}^{BC}=\frac{1.25{e}_a^{BC}}{r_{BC}^P-1}$ Fine-grid convergence index
Table 7 Grid uncertainty study for total resistance in calm water
Calm water
Parameter (Fr = 0.68)
Case of A-B-C Case of B-C-D
r[BC], r[AB] $\sqrt{2}$ $\sqrt{2}$
φ[A] 65.00 66.80
φ[B] 66.80 67.42
φ[C] 67.42 67.63
R 0.3444 0.3387
p 3.0753 3.1237
${\varphi}_{\mathrm{ext}}^{BC}$ 67.7457 67.7375
${e}_{\mathrm{ext}}^{BC}$ 0.4831% 0.1590%
${e}_a^{BC}$ 0.9196% 0.3105%
${\mathrm{GCI}}_{\mathrm{Fine}}^{BC}$ 0.6040% 0.1988%
Table 8 Grid uncertainty study for heave and pitch amplitudes in regular head wave
Parameter Wave condition
(λ[w]/L = 1.8, Fr = 0.512)
Case of A′-B′-C′ Case of B′-C′-D′
Heave (m) Pitch (°) Heave (m) Pitch (°)
r[BC], r[AB] $\sqrt{2}$ $\sqrt{2}$ $\sqrt{2}$ $\sqrt{2}$
φ[A′] 0.0350 1.450 0.0395 2.273
φ[B′] 0.0395 2.273 0.0410 2.310
φ[C′] 0.0410 2.310 0.0412 2.285
R 0.3333 0.0449 0.1333 −0.67
p 3.1699 8.9506 5.8138 1.1311
${\varphi}_{\mathrm{ext}}^{B\prime \mathrm{C}\prime }$ 0.0417 2.3117 0.0412 2.2329
${e}_{\mathrm{ext}}^{B\prime \mathrm{C}\prime }$ 1.8292% 0.0754% 0.0747% 2.2793%
${e}_a^{B\prime \mathrm{C}\prime }$ 3.6585% 1.6017% 0.4854% 1.0941%
${\mathrm{GCI}}_{\mathrm{Fine}}^{B\prime \mathrm{C}\prime }$ 2.2865% 0.0942% 0.0933% 2.8492%
Table 9 Comparison of the CFD results of the present work and available experimental data
Fr Total resistance–weight ratio (R[T]/∆)
CAT mode SWATH mode
Present work Towing tank of UTM Error (%) Present work Towing tank of UTM Error (%)
0.31 0.022 0.023 −4.35 0.021 0.022 −4.54
0.48 0.059 0.058 1.72 0.072 0.070 2.86
0.56 0.077 0.085 −9.41 0.094 0.101 −6.93
0.68 0.073 0.084 −13.10 0.089 0.092 −3.26
Table 10 Vertical motion reduction
Fore fin angle (°) Aft fin angle (°) Heave motion reduction (%) Pitch motion reduction (%)
15 5 20.32 16.98
15 10 34.54 23.18
15 15 32.08 35.17
Table 11 Different cases for CFD simulation in regular head waves
Case Wave/vessel length $\left(\frac{\lambda_w}{L}\ Wave steepness $\left(\frac{h_w}{\lambda_w}\ Wave frequency (ω) (rad Dimensionless wave frequency $\left({\omega}^{\ast }=\omega \sqrt{L/
No. right)$ right)$ /s) g}\right)$
1 0.85 0.0436 5.6 2.72
2 1 0.0370 5.16 2.50
3 1.4 0.02s65 4.36 2.12
4 1.8 0.0206 3.84 1.87
5 2 0.0185 3.65 1.77
6 2.19 0.0169 3.49 1.70
7 2.4 0.0154 3.33 1.62
8 3 0.0123 2.98 1.44
9 4.5 0.0082 2.43 1.18
10 6.5 0.0057 2.02 0.98
11 9 0.0041 1.72 0.83
• Ali A, Maimun A, Ahmed YM (2014) CFD application in resistance analysis for advanced semi-SWATH vehicle. Appl Mech Mater 465-466: 44–49. https://doi.org/10.4028/www.scientific.net/AMM.465-466.44
Ali A, Maimun A, Ahmed YM, Rahimuddin R, Ghani MPA (2015) Experimental analysis on flow around fin assisted semi-SWATH. Jurnal Teknologi 74(5): 91–95. https://doi.org/10.11113/jt.v74.4647
Begovic E, Bertorello C, Mancini S (2015) Hydrodynamic performances of small size swath craft. Brodogradnja 66(4): 1–22
Begovic E, Bertorello C, Bove A, De Luca F (2019) Experimental study on hydrodynamic performance of SWATH vessels in calm water and in head waves. Appl Ocean Res 85: 88–106. https://doi.org/
Bonfiglio L, Perdikaris P, Vernengo G, de Medeiros JS, Karniadakis G (2018) Improving SWATH seakeeping performance using multi-fidelity Gaussian process and Bayesian optimization. J Ship Res 62
(4): 223–240. https://doi.org/10.5957/JOSR.11170069
Brizzolara S, Vernengo G (2011) Automatic computer driven optimization of innovative hull forms for marine vehicles. 10th WSEAS International Conference on Applied Computer and Applied
Computational Science, ACACOS'11, Venice, 273-278
Brizzolara S, Vernengo G, Bonfiglio L, Bruzzone D (2015) Comparative performance of optimum high-speed SWATH and semi-SWATH in calm water and in waves. Transact - Soc Naval Arch Mar Eng 123(M):
Bruzzone D (1994) Numerical evaluation of the steady free surface waves. CFD Workshop Ship Res Inst Tokyo Ⅰ: 126–134
Bruzzone D (2003) Application of a Rankine source method to the evaluation of motions of high-speed marine vehicles. Proceedings of the 8th International Marine Design Conference, Athens, Greece,
Ⅱ, 69-79
Campana EF, Peri D (2000) Hydrodynamic performance comparison between twin hulls. International Conference on Ship and Shipping Research, Venice, Italy, P2000-9 Proceedings
Celik IB, Ghia U, Roache PJ, Freitas CJ, Coleman H, Raad PE (2008) Procedure for estimation and reporting of uncertainty due to discretization in CFD applications. J Fluids Eng Trans ASME 130:
Chan HS (1993) Prediction of motion and wave loads of twin-hull ships. Mar Struct 6(1): 75–102. https://doi.org/10.1016/0951-8339(93)90010-Z
Cucinotta F, Guglielmino E, Sfravara F, Strasser C (2018) Numerical and experimental investigation of a planing Air Cavity Ship and its air layer evolution. Ocean Eng 152: 130–144. https://
De Luca F, Mancini S, Miranda S, Pensa C (2016) An extended verification and validation study of CFD simulations for planing hulls. J Ship Res 60(2): 101–118. https://doi.org/10.5957/
Dubrovsky V, Lyakhovitsky A (2001) Multi-hull ships. Backbone Publishing Co., Fair Lawn, USA, p 495
Dubrovsky V, Matveev K, Sutulo S (2007) Small water-plane area ships. Backbone Publishing Co., Hoboken, USA, 256
Ferziger JH, Perić M (2002) Computational methods for fluid dynamics. Springer, Berlin, Germany, pp 292–294. https://doi.org/10.1007/978-3-642-56026-2
Frisk D, Tegehall L (2015) Prediction of high-speed planing hull resistance and running attitude. A numerical study using computational fluid dynamics. Master of Science. Department of Shipping
and Marine Technology Chalmers University of Technology, Gothenburg, pp 1–51
Gregory DL (1973) Force and moment characteristics of six high-speed rudders for use on high-performance craft. Report 4150, United State Naval Academy, 1-8
Gupta SK, Schmidt TW (1986) Developments in swath technology. Nav Eng J 98(3): 171–188. https://doi.org/10.1111/j.1559-3584.1986.tb03428.x
Guttenplan A (2007) Hydrodynamic evaluation of high-speed semi-SWATH vessels. PhD thesis. Massachusetts Institute of Technology, Cambridge, USA, pp 58–60
ITTC (2011) Practical Guidelines for Ship CFD Applications - 7.5-03-02-03. 26th International Towing Tank Conference, Rio De Jenerio, Brazil
ITTC QM (2002) Uncertainty analysis in CFD verification and validation methodology and procedures- 7.5-03-01-01. Proceedings of the 23rd International Towing Tank Conference, Venice, Italy
Jupp M, Sime R, Dudson E (2014) Xss-a next generation windfarm support vessel. RINA Conference: Design & Operation of Wind Farm Support Vessels, London, 29-30
Kahramanoğlu E, Çakıcı F, Doğrul A (2020) Numerical prediction of the vertical responses of planing hulls in regular head waves. J Mar Sci Eng 8(6): 455. https://doi.org/10.3390/jmse8060455
Kallio JA (1976) Seaworthiness characteristics of a 2900 tons small waterplane area twin hull (SWATH). David W. Taylor Naval Ship Research and Development Center, Ship Performance Department,
SPD-620-03, Maryland
Pérez-Arribas F, Calderon-Sanchez J (2020) A parametric methodology for the preliminary design of SWATH hulls. Ocean Eng 197: 106823. https://doi.org/10.1016/j.oceaneng.2019
Salvesen N, Von Kerczek CH, Scragg CA (1985) Hydro-numeric design of swath ships. Transact - Soc Naval Arch Mar Eng: 325–346
Siemens PLM (2019) STAR-CCM+ User Guide Version 13.04. Siemens PLM. Software Inc, Munich, Germany
Stern F, Wilson R, Shao J (2006) Quantitative V & V of CFD simulations and certification of CFD codes. Int J Numer Methods Fluids 50(11): 1335–1355. https://doi.org/10.1002/fld.1090
Sun H, Jing F, Jiang Y, Zou J, Zhuang J, Ma W (2016) Motion prediction of catamaran with a semisubmersible bow in wave. Polish Maritime Research 23(1): 37–44. https://doi.org/10.1515/
Tezdogan T, Demirel YK, Kellett P, Khorasanchi M, Incecik A, Turan O (2015) Full-scale unsteady RANS CFD simulations of ship behaviour and performance in head seas due to slow steaming. Ocean Eng
97: 186–206. https://doi.org/10.1016/j.oceaneng.2015.01.011
Vernengo G, Brizzolara S (2017) Numerical investigation on the hydrodynamic performance of fast SWATHs with optimum canted struts arrangements. Appl Ocean Res 63: 76–89. https://doi.org/10.1016/
Vernengo G, Bruzzone D (2016) Resistance and seakeeping numerical performance analyses of a semi-small waterplane area twin hull at medium to high speeds. J Mar Sci Appl 15(1): 1–7. https://
Vernengo G, Apollonio CM, Bruzzone D, Bonfiglio L, Brizzolara S (2018) Hydrodynamics performance of high-speed multi-hulls in waves. Marit Transport Harvest Sea Resourc 1(1996): 493–500
Voxakis P (2012) Ship hull resistance calculations using CFD methods. PhD thesis. Massachusetts Institute of Technology, Cambridge, USA, pp 24–30
Wang C, Lin Y, Hu Z, Geng L, Li D (2016) Hydrodynamic analysis of a SWATH planing USV based on CFD. OCEANS 2016, Shanghai, 2-5. https://doi.org/10.1109/OCEANSAP.2016.7485460
Whicker LF, Fehlner LF (1958) Free-stream characteristics of a family of low aspect ratio all movable control surfaces for application to ship design. In: Report: AD-A014 272. David Taylor Model
Basin, Washington, D.C.
Wilson RV, Stern F, Coleman HW, Paterson EG (2001) Comprehensive approach to verification and validation of CFD simulations—part 2: application for rans simulation of a cargo/container ship. J
Fluids Eng Transact ASME 123(4): 803–810. https://doi.org/10.1115/1.1412236
Yaakob OB, Mekanikal FK (2006) Development of a semi-SWATH craft for Malaysian waters. University of technology, Malaysia
Yun L, Bliault A, Rong HZ (2018) High speed catamarans and multihulls: technology, performance, and applications. Springer 246-249. https://doi.org/10.1007/978-1-4939-7891-5 | {"url":"http://html.rhhz.net/jmsa/html/20210405.htm","timestamp":"2024-11-12T19:36:20Z","content_type":"text/html","content_length":"376407","record_id":"<urn:uuid:01f42198-f447-4ad5-a9ba-ec0e1d073881>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00009.warc.gz"} |
Today in Science History - Quickie Quiz
Who said: “I was going to record talking... the foil was put on; I then shouted 'Mary had a little lamb',... and the machine reproduced it perfectly.”
Category Index for Science Quotations
Category Index P
> Category: Presentation
Presentation Quotes (24 quotes)
[Florence Nightingale] was a great administrator, and to reach excellence here is impossible without being an ardent student of statistics. Florence Nightingale has been rightly termed the
“Passionate Statistician.” Her statistics were more than a study, they were indeed her religion. For her, Quetelet was the hero as scientist, and the presentation copy of his Physique Sociale is
annotated by her on every page. Florence Nightingale believed—and in all the actions of her life acted upon that belief—that the administrator could only be successful if he were guided by
statistical knowledge. The legislator—to say nothing of the politician—too often failed for want of this knowledge. Nay, she went further: she held that the universe—including human communities—was
evolving in accordance with a divine plan; that it was man's business to endeavour to understand this plan and guide his actions in sympathy with it. But to understand God's thoughts, she held we
must study statistics, for these are the measure of his purpose. Thus the study of statistics was for her a religious duty.
In Karl Pearson, The Life, Letters and Labours of Francis Galton (1924), Vol. 2, 414-5.
A myth is, of course, not a fairy story. It is the presentation of facts belonging to one category in the idioms appropriate to another. To explode a myth is accordingly not to deny the facts but to
re-allocate them.
In The Concept of Mind (1949), 8.
But it is precisely mathematics, and the pure science generally, from which the general educated public and independent students have been debarred, and into which they have only rarely attained more
than a very meagre insight. The reason of this is twofold. In the first place, the ascendant and consecutive character of mathematical knowledge renders its results absolutely insusceptible of
presentation to persons who are unacquainted with what has gone before, and so necessitates on the part of its devotees a thorough and patient exploration of the field from the very beginning, as
distinguished from those sciences which may, so to speak, be begun at the end, and which are consequently cultivated with the greatest zeal. The second reason is that, partly through the exigencies
of academic instruction, but mainly through the martinet traditions of antiquity and the influence of mediaeval logic-mongers, the great bulk of the elementary text-books of mathematics have
unconsciously assumed a very repellant form,—something similar to what is termed in the theory of protective mimicry in biology “the terrifying form.” And it is mainly to this formidableness and
touch-me-not character of exterior, concealing withal a harmless body, that the undue neglect of typical mathematical studies is to be attributed.
In Editor’s Preface to Augustus De Morgan and Thomas J. McCormack (ed.), Elementary Illustrations of the Differential and Integral Calculus (1899), v.
Generality of points of view and of methods, precision and elegance in presentation, have become, since Lagrange, the common property of all who would lay claim to the rank of scientific
mathematicians. And, even if this generality leads at times to abstruseness at the expense of intuition and applicability, so that general theorems are formulated which fail to apply to a single
special case, if furthermore precision at times degenerates into a studied brevity which makes it more difficult to read an article than it was to write it; if, finally, elegance of form has
well-nigh become in our day the criterion of the worth or worthlessness of a proposition,—yet are these conditions of the highest importance to a wholesome development, in that they keep the
scientific material within the limits which are necessary both intrinsically and extrinsically if mathematics is not to spend itself in trivialities or smother in profusion.
In Die Entwickdung der Mathematik in den letzten Jahrhunderten (1884), 14-15.
I believe scientists have a duty to share the excitement and pleasure of their work with the general public, and I enjoy the challenge of presenting difficult ideas in an understandable way.
From Autobiography in Wilhelm Odelberg (ed.), Les Prix Nobel en 1974/Nobel Lectures (1975)
In the sense that [truth] means the reality about a human being it is probably impossible for a biographer to achieve. In the sense that it means a reasonable presentation of all the available facts
it is more nearly possible, but even this limited goal is harder to reach than it appears to be. A biographer needs to be both humble and cautious.
Describing the difficulty of historical sources giving conflicting facts. From 'Getting at the Truth', The Saturday Review (19 Sep 1953), 36, No. 38, 11. Excerpted in Meta Riley Emberger and Marian
Ross Hall, Scientific Writing (1955), 399.
It is in scientific honesty that I endorse the presentation of alternative theories for the origin of the universe, life and man in the science classroom. It would be an error to overlook the
possibility that the universe was planned rather than happening by chance.
In letter to California State board of Education (14 Sep 1972).
It is not improbable that some of the presentations which come before the mind in sleep may even be causes of the actions cognate to each of them. For as when we are about to act [in waking hours],
or are engaged in any course of action, or have already performed certain actions, we often find ourselves concerned with these actions, or performing them, in a vivid dream.
In Mortimer Jerome Adler, Charles Lincoln Van Doren (eds.) Great Treasury of Western Thought: A Compendium of Important Statements on Man and His Institutions by the Great Thinkers in Western History
(1977), 352
Kirchhoff’s whole tendency, and its true counterpart, the form of his presentation, was different [from Maxwell’s “dramatic bulk”]. … He is characterized by the extreme precision of his hypotheses,
minute execution, a quiet rather than epic development with utmost rigor, never concealing a difficulty, always dispelling the faintest obscurity. … he resembled Beethoven, the thinker in tones. — He
who doubts that mathematical compositions can be beautiful, let him read his memoir on Absorption and Emission … or the chapter of his mechanics devoted to Hydrodynamics.
In Ceremonial Speech (15 Nov 1887) celebrating the 301st anniversary of the Karl-Franzens-University Graz. Published as Gustav Robert Kirchhoff: Festrede zur Feier des 301. Gründungstages der
Karl-Franzens-Universität zu Graz (1888), 30, as translated in Robert Édouard Moritz, Memorabilia Mathematica; Or, The Philomath’s Quotation-book (1914), 187. From the original German, “Kirchhoff …
seine ganze Richtung war eine andere, und ebenso auch deren treues Abbild, die Form seiner Darstellung. … Ihn charakterisirt die schärfste Präcisirung der Hypothesen, feine Durchfeilung, ruhige mehr
epische Fortentwicklung mit eiserner Consequenz ohne Verschweigung irgend einer Schwierigkeit, unter Aufhellung des leisesten Schattens. … er glich dem Denker in Tönen: Beethoven. – Wer in Zweifel
zieht, dass mathematische Werke künstlerisch schön sein können, der lese seine Abhandlung über Absorption und Emission oder den der Hydrodynamik gewidmeten Abschnitt seiner Mechanik.” The memoir
reference is Gesammelte Abhandlungen (1882), 571-598.
Let me tell you how at one time the famous mathematician Euclid became a physician. It was during a vacation, which I spent in Prague as I most always did, when I was attacked by an illness never
before experienced, which manifested itself in chilliness and painful weariness of the whole body. In order to ease my condition I took up Euclid’s Elements and read for the first time his doctrine
of ratio, which I found treated there in a manner entirely new to me. The ingenuity displayed in Euclid’s presentation filled me with such vivid pleasure, that forthwith I felt as well as ever.
Selbstbiographie (1875), 20. In Robert Édouard Moritz, Memorabilia Mathematica; Or, The Philomath's Quotation-book (1914), 146.
Mathematics, among all school subjects, is especially adapted to further clearness, definite brevity and precision in expression, although it offers no exercise in flights of rhetoric. This is due in
the first place to the logical rigour with which it develops thought, avoiding every departure from the shortest, most direct way, never allowing empty phrases to enter. Other subjects excel in the
development of expression in other respects: translation from foreign languages into the mother tongue gives exercise in finding the proper word for the given foreign word and gives knowledge of laws
of syntax, the study of poetry and prose furnish fit patterns for connected presentation and elegant form of expression, composition is to exercise the pupil in a like presentation of his own or
borrowed thoughtsand their development, the natural sciences teach description of natural objects, apparatus and processes, as well as the statement of laws on the grounds of immediate
sense-perception. But all these aids for exercise in the use of the mother tongue, each in its way valuable and indispensable, do not guarantee, in the same manner as mathematical training, the
exclusion of words whose concepts, if not entirely wanting, are not sufficiently clear. They do not furnish in the same measure that which the mathematician demands particularly as regards precision
of expression.
In Anleitung zum mathematischen Unterricht in höheren Schulen (1906), 17.
No part of Mathematics suffers more from the triviality of its initial presentation to beginners than the great subject of series. Two minor examples of series, namely arithmetic and geometric
series, are considered; these examples are important because they are the simplest examples of an important general theory. But the general ideas are never disclosed; and thus the examples, which
exemplify nothing, are reduced to silly trivialities.
In An Introduction to Mathematics (1911), 194.
Physical science enjoys the distinction of being the most fundamental of the experimental sciences, and its laws are obeyed universally, so far as is known, not merely by inanimate things, but also
by living organisms, in their minutest parts, as single individuals, and also as whole communities. It results from this that, however complicated a series of phenomena may be and however many other
sciences may enter into its complete presentation, the purely physical aspect, or the application of the known laws of matter and energy, can always be legitimately separated from the other aspects.
In Matter and Energy (1912), 9-10.
Science is a game—but a game with reality, a game with sharpened knives … If a man cuts a picture carefully into 1000 pieces, you solve the puzzle when you reassemble the pieces into a picture; in
the success or failure, both your intelligences compete. In the presentation of a scientific problem, the other player is the good Lord. He has not only set the problem but also has devised the rules
of the game—but they are not completely known, half of them are left for you to discover or to deduce. The experiment is the tempered blade which you wield with success against the spirits of
darkness—or which defeats you shamefully. The uncertainty is how many of the rules God himself has permanently ordained, and how many apparently are caused by your own mental inertia, while the
solution generally becomes possible only through freedom from its limitations.
Quoted in Walter Moore, Schrödinger: Life and Thought (1989), 348.
She has the sort of body you go to see in marble. She has golden hair. Quickly, deftly, she reaches with both hands behind her back and unclasps her top. Setting it on her lap, she swivels ninety
degrees to face the towboat square. Shoulders back, cheeks high, she holds her pose without retreat. In her ample presentation there is defiance of gravity. There is no angle of repose. She is a
siren and these are her songs.
The enthusiasm of Sylvester for his own work, which manifests itself here as always, indicates one of his characteristic qualities: a high degree of subjectivity in his productions and publications.
Sylvester was so fully possessed by the matter which for the time being engaged his attention, that it appeared to him and was designated by him as the summit of all that is important, remarkable and
full of future promise. It would excite his phantasy and power of imagination in even a greater measure than his power of reflection, so much so that he could never marshal the ability to master his
subject-matter, much less to present it in an orderly manner.
Considering that he was also somewhat of a poet, it will be easier to overlook the poetic flights which pervade his writing, often bombastic, sometimes furnishing apt illustrations; more damaging is
the complete lack of form and orderliness of his publications and their sketchlike character, … which must be accredited at least as much to lack of objectivity as to a superfluity of ideas. Again,
the text is permeated with associated emotional expressions, bizarre utterances and paradoxes and is everywhere accompanied by notes, which constitute an essential part of Sylvester’s method of
presentation, embodying relations, whether proximate or remote, which momentarily suggested themselves. These notes, full of inspiration and occasional flashes of genius, are the more stimulating
owing to their incompleteness. But none of his works manifest a desire to penetrate the subject from all sides and to allow it to mature; each mere surmise, conceptions which arose during
publication, immature thoughts and even errors were ushered into publicity at the moment of their inception, with utmost carelessness, and always with complete unfamiliarity of the literature of the
subject. Nowhere is there the least trace of self-criticism. No one can be expected to read the treatises entire, for in the form in which they are available they fail to give a clear view of the
matter under contemplation.
Sylvester’s was not a harmoniously gifted or well-balanced mind, but rather an instinctively active and creative mind, free from egotism. His reasoning moved in generalizations, was frequently
influenced by analysis and at times was guided even by mystical numerical relations. His reasoning consists less frequently of pure intelligible conclusions than of inductions, or rather conjectures
incited by individual observations and verifications. In this he was guided by an algebraic sense, developed through long occupation with processes of forms, and this led him luckily to general
fundamental truths which in some instances remain veiled. His lack of system is here offset by the advantage of freedom from purely mechanical logical activity.
The exponents of his essential characteristics are an intuitive talent and a faculty of invention to which we owe a series of ideas of lasting value and bearing the germs of fruitful methods. To no
one more fittingly than to Sylvester can be applied one of the mottos of the Philosophic Magazine:
“Admiratio generat quaestionem, quaestio investigationem investigatio inventionem.”
In Mathematische Annalen (1898), 50, 155-160. As translated in Robert Édouard Moritz, Memorabilia Mathematica; Or, The Philomath’s Quotation-book (1914), 176-178.
The goal of this presentation is to impress, rather than inform.
As quoted in obituary, A.L. Hodgkin, 'Some Recollections of William Rushton and his Contributions to Neurophysiology', Vision Research (1982), 22, 614. Hodgkin wrote that this quote was “confided to
me before a 12 minute talk describing our work.”
The Law of Inhibition. The strength of a reflex may be decreased through presentation of a second stimulus which has no other relation to the effector involved.
In The Behavior of Organisms: An Experimental Analysis (1938), 17.
The majority of mathematical truths now possessed by us presuppose the intellectual toil of many centuries. A mathematician, therefore, who wishes today to acquire a thorough understanding of modern
research in this department, must think over again in quickened tempo the mathematical labors of several centuries. This constant dependence of new truths on old ones stamps mathematics as a science
of uncommon exclusiveness and renders it generally impossible to lay open to uninitiated readers a speedy path to the apprehension of the higher mathematical truths. For this reason, too, the
theories and results of mathematics are rarely adapted for popular presentation … This same inaccessibility of mathematics, although it secures for it a lofty and aristocratic place among the
sciences, also renders it odious to those who have never learned it, and who dread the great labor involved in acquiring an understanding of the questions of modern mathematics. Neither in the
languages nor in the natural sciences are the investigations and results so closely interdependent as to make it impossible to acquaint the uninitiated student with single branches or with particular
results of these sciences, without causing him to go through a long course of preliminary study.
In Mathematical Essays and Recreations (1898), 32.
The presentation of mathematics where you start with definitions, for example, is simply wrong. Definitions aren't the places where things start. Mathematics starts with ideas and general concepts,
and then definitions are isolated from concepts. Definitions occur somewhere in the middle of a progression or the development of a mathematical concept. The same thing applies to theorems and other
icons of mathematical progress. They occur in the middle of a progression of how we explore the unknown.
Interview for website of the Mathematical Association of America.
There can be no doubt that science is in many ways the natural enemy of language. Language, either literary or colloquial, demands a rich store of living and vivid words—words that are
“thoughtpictures,” and appeal to the senses, and also embody our feelings about the objects they describe. But science cares nothing about emotion or vivid presentation; her ideal is a kind of
algebraic notation, to be used simply as an instrument of analysis; and for this she rightly prefers dry and abstract terms, taken from some dead language, and deprived of all life and personality.
In The English Language (1912), 124-125.
To present a scientific subject in an attractive and stimulating manner is an artistic task, similar to that of a novelist or even a dramatic writer. The same holds for writing textbooks.
My Life & My Views (1968), 48.
We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.
This was a favorite quotation of John Bahcall, who used it in his presentation at the Neutrino 2000 conference.
Poem, 'Little Gidding,' (1942). Collected in Four Quartets (1943), Pt. 5, 39.
You have … been told that science grows like an organism. You have been told that, if we today see further than our predecessors, it is only because we stand on their shoulders. But this [Nobel Prize
Presentation] is an occasion on which I should prefer to remember, not the giants upon whose shoulders we stood, but the friends with whom we stood arm in arm … colleagues in so much of my work.
From Nobel Banquet speech (10 Dec 1960).
In science it often happens that scientists say, 'You know that's a really good argument; my position is mistaken,' and then they would actually change their minds and you never hear that old view
from them again. They really do it. It doesn't happen as often as it should, because scientists are human and change is sometimes painful. But it happens every day. I cannot recall the last time
something like that happened in politics or religion. (1987) --
Carl Sagan
Sitewide search within all Today In Science History pages:
Visit our
Science and Scientist Quotations
index for more Science Quotes from archaeologists, biologists, chemists, geologists, inventors and inventions, mathematicians, physicists, pioneers in medicine, science events and technology.
Names index: |
Z |
Categories index: |
Z | | {"url":"https://todayinsci.com/QuotationsCategories/P_Cat/Presentation-Quotations.htm","timestamp":"2024-11-12T16:59:17Z","content_type":"text/html","content_length":"203267","record_id":"<urn:uuid:551d9dd5-47ec-48e7-8f45-a99ef6d8e9d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00694.warc.gz"} |
Vulkan GLSL/SPIR-V invalid imageSize result
Since driver 565.90 (the newest at the time of writing) (with an RTX 3080) imageSize in GLSL does not return the correct value. The driver before this, and all before that (for atleast multiple
years) did not have this bug.
The shader looks like this:
#version 460 core
layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;
layout(set = 0, binding = 0) uniform samplerCube srcImage;
layout(set = 0, binding = 1) restrict writeonly uniform imageCube dstImage;
vec3 ComputeTextureCoords(uvec2 size) {
const vec2 st = (vec2(gl_GlobalInvocationID.xy) + 0.5f) / vec2(size);
const vec2 uv = 2.0f * vec2(st.x, 1.0f - st.y) - vec2(1.0f);
const vec3 coords[6] = vec3[](
vec3( 1.0f, uv.y, -uv.x),
vec3(-1.0f, uv.y, uv.x),
vec3( uv.x, 1.0f, -uv.y),
vec3( uv.x, -1.0f, uv.y),
vec3( uv.x, uv.y, 1.0f),
vec3(-uv.x, uv.y, -1.0f)
return normalize(coords[gl_GlobalInvocationID.z % 6u]);
void main() {
const uvec2 pixel = gl_GlobalInvocationID.xy;
const uvec2 dst_size = uvec2(imageSize(dstImage).xy);
if (all(lessThan(pixel, dst_size))) {
const vec3 uvw = ComputeTextureCoords(dst_size);
const float texel_size = 1.0f / float(dst_size.x);
uint taps = 0u;
vec4 sum = vec4(0.0f);
const int sample_size = 3;
// TODO: optimize
for (int i = -sample_size; i <= sample_size; i++) {
for (int j = -sample_size; j <= sample_size; j++) {
for (int k = -sample_size; k <= sample_size; k++) {
sum += texture(srcImage, uvw + (vec3(i, j, k) * vec3(texel_size)));
const vec4 filtered = sum / float(taps);
const uint current_face_index = gl_GlobalInvocationID.z % 6u;
imageStore(dstImage, ivec3(pixel, current_face_index), filtered);
Where srcImage and dstImage are both always the same image, but srcImage is always 1 mip level before the mip level of dstImage.
This line: const uvec2 dst_size = uvec2(imageSize(dstImage).xy); always used to get the correct result, but since driver 565.90, it always returns the size of the largest mip. Even when the
descriptor contains an VkImageView with a different mip level than 0.
So with a texture size of 1024, srcImage has a size of 1024 and dstImage a size of 512. However the imageSize(dstImage) call returns 1024.
I worked around this bug by using a push constant for the target mip size, which does work. I have not tested whether textureSize has the same bug.
When I debug in RenderDoc, by stepping through the code, it does say that imageSize returns 512 (for mip level 1) but that is probably some sort of emulation and not the real value the driver had put
I also tried this on a different PC (also RTX 3080), with a driver from last month (September 2024) which also confirmed that this is a new bug in the newly released driver. | {"url":"https://forums.developer.nvidia.com/t/vulkan-glsl-spir-v-invalid-imagesize-result/308886","timestamp":"2024-11-06T19:56:39Z","content_type":"text/html","content_length":"42511","record_id":"<urn:uuid:bed442e0-e2bd-4d0c-bfdc-4d669ed797f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00365.warc.gz"} |
A Brief Introduction to Fréchet Derivative
Fréchet derivative is a generalisation to the ordinary derivatives. Generally we are talking about Banach space, where $\mathbb{R}$ is a special case. This is to say, the space discussed is not even
required to be of finite dimension. We use $\mathbf{E}$ and $\mathbf{F}$ to denote Banach spaces.
A real-valued function $f(t)$ of a real variable, defined on some neighbourhood of $0$, is said to be of $o(t)$ if
And its derivative at some point $a$ is defined by
We also have this equivalent equation:
Now suppose $f:U \subset \mathbb{R}^n \to \mathbb{R}^m$ where $U$ is an open set. The function $f$ is differentiable at $x_0 \in U$ if satisfying the following conditions.
1. All partial derivatives of $f$, i.e. $\frac{\partial f_i}{\partial x_j}$ exists for all $i=1,\cdots,m$ and $j = 1,\cdots,n$ at $f$. (Which ensures that the Jacobian matrix exists and is
2. The Jacobian matrix $J(x_0)\in\mathbb{R}^{m\times n}$ satisfies
In fact the Jacobian matrix can be considered as the derivative of $f$ at $x_0$ although it’s a matrix in lieu of number. But we should treat a number as a matrix in the general case. In the
following definition of Fréchet derivative, you will see that we should treat something as linear maps.
Let $f:U\to\mathbf{F}$ be a function where $U$ is an open subset of $\mathbf{E}$. We say $f$ is Fréchet differentiable at $x \in U$ if there is a bounded and linear operator $\lambda:\mathbf{E} \to \
mathbf{F}$ such that
We say that $\lambda$ is the derivative of $f$ at $x$, which will be denoted by $Df(x)$ or $f’(x)$. Notice that $\lambda \in L(\mathbf{E},\mathbf{F})$. If $f$ is differentiable at every point of $f$,
then $f’$ is a map given by
The definition above doesn’t go too far from real functions defined on the real axis. Now we are assuming that both $\mathbf{E}$ and $\mathbf{F}$ are merely topological vector spaces, and still we
can get the definition of Fréchet derivative (generalised).
Let $\varphi$ be a mapping of a neighborhood of $0$ of $\mathbf{E}$ into $\mathbf{F}$. We say that $\varphi$ is tangent to $0$ if given a neighbourhood $W$ of $0$ in $\mathbf{F}$, there exists a
neighbourhood $V$ of $0$ in $\mathbf{E}$ such that
for some real function of $o(t)$. For example, if both $\mathbf{E}$ and $\mathbf{F}$ are normed (not have to be Banach), then we get a usual condition by
where $\lim_{\lVert x \rVert \to 0}\psi(x)=0$.
Still we assume that $\mathbf{E}$ and $\mathbf{F}$ are topological vector spaces. Let $f:U \to \mathbf{F}$ be a continuous map. We say that $f$ is differentiable at a point $x \in U$ if there exists
some $\lambda \in L(\mathbf{E},\mathbf{F})$ such that for small $y$ we have
where $\varphi$ is tangent to $0$. Notice that $\lambda$ is uniquely determined. This definition can be easily tested on the real line.
Basic concepts
You are certainly familiar with these properties of derivative, but we are redoing these in Banach spaces.
Chain rule
If $f: U \to V$ is differentiable at $x_0$, and $g:V \to W$ is differentiable at $f(x_0)$, then $g \circ f$ is differentiable at $x_0$, and
Proof. We are proving this in topological vector space. By definition, we already have some linear operator $\lambda$ and $\mu$ such that
where $\varphi$ and $\psi$ are tangent to $0$. Further, we got
To evaluate $g(f(x_0+y))$, notice that
It’s clear that $\mu\circ\varphi(y)+\psi(\lambda{y}+\varphi(y))$ is tangent to $0$, and $\mu\circ\lambda$ is the linear map we are looking for. That is,
Derivative of higher orders
From now on, we are dealing with Banach spaces. Let $U$ be an open subset of $\mathbf{E}$, and $f:U \to \mathbf{F}$ be differentiable at each point of $U$. If $f’$ is continuous, then we say that $f$
is of class $C^1$. The function of order $C^p$ where $p \geq 1$ is defined inductively. The $p$-th derivative $D^pf$ is defined as $D(D^{p-1}f)$ and is itself a map of $U$ into $L(\mathbf{E},L(\
mathbf{E},\cdots,L(\mathbf{E},\mathbf{F})\cdots)))$ which is isomorphic to $L^p(\mathbf{E},\mathbf{F})$. A map $f$ is said to be of class $C^p$ if its $kth$ derivative $D^kf$ exists for $1 \leq k \
leq p$, and is continuous. With the help of chain rule, and the fact that the composition of two continuous functions are continuous, we get
Let $U,V$ be open subsets of some Banach spaces. If $f:U \to V$ and $g: V \to \mathbf{F}$ are of class $C^p$, then so is $g \circ f$.
Open subsets of Banach spaces as a category
We in fact get a category $\{(U,f_U)\}$ where $U$ is the object as an open subset of some Banach space, and $f_U$ is the morphism as a map of class $C^p$ mapping $U$ into another open set. To verify
this, one only has to realize that the composition of two maps of class $C^p$ is still of class $C^p$ (as stated above).
We say that $f$ is of class $C^\infty$ if $f$ is of class $C^p$ for all integers $p \geq 1$. Meanwhile $C^0$ maps are the continuous maps.
An example
We are going to evaluate the Fréchet derivative of a nonlinear functional. It is the derivative of a functional mapping an infinite dimensional space into $\mathbb{R}$ (instead of $\mathbb{R}$ to $\
Consider the functional by
where the norm is defined by
For $u\in C[0,1]$, we are going to find an linear operator $\lambda$ such that
where $\varphi(\eta)$ is tangent to $0$.
Solution. By evaluating $\Gamma(u+\eta)$, we get
To prove that $\int_{0}^{1}\eta^2\sin{x}dx$ is the $\varphi(\eta)$ desired, notice that
Therefore we have
as desired. The Fréchet derivative of $\Gamma$ at $u$ is defined by
It’s hard to believe but, the derivative is not a number, nor a matrix, but a linear operator. But essentially, a real or complex number or matrix can be and should be treated as a linear operator in
the nature of things.
A Brief Introduction to Fréchet Derivative | {"url":"https://desvl.xyz/2020/07/31/frechet-derivative/","timestamp":"2024-11-13T12:39:02Z","content_type":"text/html","content_length":"29394","record_id":"<urn:uuid:78aa7aa9-d968-4377-8695-68caf7fc80ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00896.warc.gz"} |
seminars - Invitation to crystal bases for quantum symmetric pairs
2023-02-15 (Wed) AM 10:00 ~ 12:00
2023-02-17 (Fri) AM 10:00 ~ 11:00
The theory of crystal bases for quantum symmetric pairs, i.e., $\imath$crystal bases, which is still in progress, is an $\imath$quantum group (also known as ``quantum symmetric pair coideal
subalgebra'') counterpart of the theory of crystal bases.A goal of the theory of $\imath$crystal bases is to provide a way to recover much information about the structures of representations of $\
imath$quantum groups from its crystal limit, just like the theory of crystal bases for quantum groups.In these three hours of lecture, we first review basic theory of canonical bases and crystal
bases for quantum groups, and $\imath$canonical bases for $\imath$quantum groups. Then, we introduce a recent progress on the theory of $\imath$crystal bases of quasi-split locally finite type. As
mentioned above, the theory of $\imath$crystal bases of arbitrary type is not completed yet. Toward a next step, we discuss how the already known theory of $\imath$crystal bases could be generalized
to locally finite types. It would be a great pleasure for the speaker if the audience would be interested in and develop this ongoing project.
*This seminar will be held on Zoom. | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&l=en&sort_index=Time&order_type=desc&page=88&document_srl=1033000","timestamp":"2024-11-14T05:22:01Z","content_type":"text/html","content_length":"46988","record_id":"<urn:uuid:0db008c7-cca5-43bd-945b-bfcf432b218d>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00216.warc.gz"} |
PalaeoMath: Part 17 - Shape Theory
17. Shape Theory
Written by Norm MacLeod - The Natural History Museum, London, UK (email: n.macleod@nhm.ac.uk). This article first appeared in the Nº 71 edition of Palaeontology Newsletter.
Now that we’ve come to grips with Procrustes superposition we’re in a position to understand what shapes really are and how they are distributed in a geometric space. From there the problems
associated with analyzing shapes with traditional, distance-based variables will be obvious, as will the manner in which shapes should be analyzed. This material all falls under the general heading
of ‘shape theory’ which is part of the mathematical field of topology. Even mathematicians find topology an arcane, complex and difficult subject. So, you’ll be relieved to learn we’re not going to
discuss in detail. But I will need to introduce you to some basic topological concepts in the context of the discussion.
Shape Theory
Let’s begin the discussion with a simple example of the standard approach to the description of shape. Consider the set of triangles shown in Figure 1.
The standard distance-based variables used to describe triangles are basal width and apex height.1 Note these distances make a clear distinction between the apex landmark and basal landmarks, with
the latter able to be further subdivided into right and left locations. Accordingly, these variables could be calculated for any set of three landmarks used to portray the relative positions of
structures on a fossil body. Indeed, this triangle measurement system assumes that each landmark can be defined uniquely within its set.
Once the landmarks have been located it is a trivial task to place each shape in its correct position relative to others in the space formed by these two variable axes. This is precisely the sort of
shape space we used in our discussions of regression and multivariate data analysis. But is a space so defined fully adequate to express similarities and differences among these objects?
The first hint that this might not be the case comes through inspection of the diagonal of triangle shapes from lower left to upper right. These are all equilateral triangles (= all sides of equal
length) and so have the same shape. The difference between the triangles located along this diagonal is one of size, not shape. Now consider the other diagonal of shapes, from upper left to lower
right. All three triangles along this diagonal differ in shape. But whereas the upper left and lower right forms are identical in size, both are smaller than the middle triangle. Thus, size and shape
are complexly confounded within this distance-based form space. The final complication, however, comes with the realization that this space is unable to describe triangles uniquely.
For the example shown in Figure 1 I chose to draw isosceles triangles in the space. I could have chosen any type of triangle. Figure 2 shows the same plot for right triangles that verge either to the
left or the right. Of course, right triangles still have a basal width and an apex height. We can use the same variables to describe them. But note that when we do both sets of right triangles plot
in exactly the same positions as the set of isosceles triangles in Figure 1.
This simple experiment suggests the geometric space formed by these two distance variables is anything but simple and straightforward to interpret for morphological data. Size and shape are
confounded in complex ways and individual positions within the space represent large (effectively infinite) families of possible shapes (in this case triangles), each of which differs from the others
in shape, size, or both. Such variables may be able to be used to test simple hypotheses involving shapes whose range of variation is limited (e.g., out example trilobite data). Even in these cases
though, the inherent geometric ambiguity of the space formed by such variables should always be kept in mind.
If all this complexity applies to the analysis of two distance variables, imagine the problems associated with both assessing and keeping track of the additional complexities that result from the
description of shapes using more than two distance variables! As we have already seen, patterns of variation in such data can be assessed using powerful techniques such as PCA and PCoord. But use of
these methods does not improve the power of distance variables themselves to describe shapes adequately. If anything, the correct geometric interpretation of multivariate ordination spaces based on
inherently ambiguous distance variables is even more complex than this simple two-variable example for any but the most well-behaved datasets.
What to do? Triangles are simple, two-dimensional figures. There must be a geometric space in which the shape of any triangle can be located uniquely. What we need to do is find this space, develop
some insight into what this space looks like, and develop tools that will allow us to use this space to make accurate comparisons between shapes. Let’s try to use the Procrustes tool we developed
last time on these triangle data to get our heads around what’s going on.
Recall that, under the Procrustes approach, shapes are those aspects of geometry left over after the factors of form difference attributable to (1) position, (2) scaling, and (3) rotation have all
been removed from data consisting of the coordinate locations of comparable landmarks. If we take the set of x,y coordinates for the 27 triangles shown in figures 1 and 2 and calculate their
Procrustes superposition on the sample mean shape, the resultant plot of superposed coordinate values looks like Figure 3.
The symmetry of this shape-coordinate plot may come as a surprise. Remember, Procrustes superposition tries to minimize the deviation between a target and a reference form (= the mean shape) at all
corresponding landmark locations across the entire form. Sometimes this results in odd-looking rotations of the datasets. But Procrustes superposition has the distinct advantage of minimising shape
differences globally.
Table 1. Eigenvalue results of triangle shape analysis
Component Eigenvalue Shape Variance (%) Cum. Shape Variance (%)
1 0.058 49.88 49.88
2 0.057 48.64 98.52
3 0.002 1.48 100.00
Once these data have been matched for shape variation we can obtain a sense of their linear ordination by performing a standard PCA analysis of the superposed coordinate values. Table 1 provides
information about the amount of shape variation that exists in this superposed shape-coordinate dataset. Despite the fact that six variables were used in the analysis, there are only three non-zero
eigenvalues. This happens because the Procrustes standardization for position, size, and rotation removes three components of shape variation from a dataset of landmark points described by two
Euclidean dimensions. With respect to the remaining axes PC-1 and PC-2 subsume subequal amounts of shape variation with a small remainder being represented on PC-3. Here it is important to emphasize
that the three-dimensional representation of the triangle shape space is not a mere by-product of this dataset. Three non-zero eigenvectors would be returned no matter how many triangles were
included in the dataset or what their shapes were, so long as they are represented by two-dimensional (x,y) coordinate data matched using the Procrustes method.
Since we have defined shape as that subset of the observed variation left over after standardization for position, size, and rotation, this means that the characteristic shape space for any form
represented by three landmarks is three-dimensional. By using appropriate software we can graphically represent the complete mathematical shape space of triangles. Of course, our small dataset of 27
isosceles and right-triangles is but a small subset of all possible triangles. Nevertheless, inspection of this small region of the overall triangle shape space (Fig. 4) yields important insights.
There’s much to discuss with relation to this graph. First, notice that, unlike the distance-based PC space shown in figures 1 and 2, the Procrustes shape space has a unique coordinate location for
all three sets of triangles. This means the Procrustes-referenced representation of shape relations is complete. In fact, it’s more complete it probably appears at first glance. Count the number of
points in each colour-coded triangle set. That’s odd! There are only seven points in each set. Yet, in figures 1 and 2 there are nine triangles. What happened to the extra two per set?
Recall that in each set the upward-trending diagonal (lower left - upper right) contained forms that differed in size, but not in shape. These forms plotted in different places in the distance-based
space because that (traditional) space confounds size and shape relations. Not so the Procrustes space. The fourth point in each series is a coordinate location where three shapes plot. This
represents an internal check on the fidelity of the Procrustes shape space. In the distance-based PCA space, shapes that were identical plotted in different locations. In the Procrustes PCA space,
these same shapes plot at the same location.
But does the overall picture of shape similarity relations shown in Figure 4 make sense? The triangles in figures 1 and 2 can be subdivided by the upward trending diagonal of identical shapes into
two groups. Triangles that plot below the diagonal are wide and low. Those plotting above the diagonal are tall and narrow. Within these subsets the shapes occupying the upper left and lower right
corners are more extreme than the two closer to the diagonal. Therefore, we should expect these extreme shapes to represent the ends of each sequence in Figure 4, the identical shapes along the
diagonal to represent the middle of each sequence, and the intermediate tall-narrow and short-wide shapes to be located in between, on either side of the group-specific mean shapes (arrows in Fig.
4). This is precisely the ordering of shapes seen in Figure 4.
In terms of inter-group relations, the tall, narrow end-member shapes in each sequence are grouped close together at the top of the diagram because it is possible to bring their landmark locations
into close alignment. This correspondence is impossible to achieve with the shorter, broader forms. Therefore, not only is the Procrustes-based shape space portraying shape similarities accurately,
it’s also portraying shape differences in a manner that agrees with what would be a taxonomist’s geometric intuition.
The advantages of using the Procrustes alignment as a basis for shape comparison should be clear by now. But there’s more. Perhaps the most intriguing aspect of the Procrustes shape space is the
curvature in the shape sequences that’s plainly visible when all three PCA axes are plotted together (Fig. 4, right). It’s almost as though the shapes are lying on the surface of some invisible,
underlying structure. As it turns out, that’s exactly the case.
We can better assess the shape of this invisible structure by increasing the sample size and diversity of triangular shapes and repeating the analysis. Figure 5 shows a selection of a dataset of 500
random triangles that were subjected to Procrustes alignment and PCA analysis. Figure 6 details the distribution of these 500 triangles in the space formed by the three PCA axes.
Because Procrustes shape data are expressed as deviations from a mean shape, the Procrustes PCA space is centred on the mean shape. Also, because dataset is composed of random triangle shapes, the
distribution of shapes is roughly circular about the mean shape. However, as you can see from the three-dimensional plot in Figure 6, all the triangle shapes are distributed on the surface of what
appears to be a hemispherical form. Regardless of the final geometry of this surface, it would appear Procrustes shape distributions exist in a curved mathematical space.
As it turns out, the full form space for triangles is a perfect sphere. Figure 7 is the canonical representation of this space which, for reasons that will become clear momentarily, we call the
pre-shape space.
Figure 7 is a two-dimensional map of the three-dimensional triangle pre-shape sphere. Like all spheres, the orientation of the grid system is arbitrary. In this diagram an equilateral triangle, apex
up, has been chosen as one pole and the same triangle, apex down as the other pole. The green circle is the sphere’s equator and the lower hemisphere has been folded up to form a ring around the
upper hemisphere. Triangles whose apices are located above the baseline are located in the upper hemisphere, those whose apices are located below the baseline in the lower hemisphere. In this
orientation the equator represents the set of colinear triangles in which all three vertices lie on the same line.
There are several important things to note about the pre-shape sphere. First, all possible triangles can be mapped to a unique coordinate location on the surface of the sphere. Another way of saying
this is that each coordinate location on the pre-shape sphere represents a unique configuration of the three landmarks that make up a triangle. Thus, this sphere’s surface represents a complete
representation of the geometry of triangular shape.
What about size? In this representation size is denoted by the radius of the pre-shape sphere. Physically large triangles plot on the surfaces of spheres with large radii, small triangles on spheres
with small radii. Recall that, by convention, Procrustes alignment rigidly expands or shrinks all shapes until they have unit centroid size. This operation projects the original shapes—that exist on
pre-shape spheres of varying sizes—to their corresponding positions on the unit-sized sphere, thus facilitating direct shape comparison.
What about rotation? Recall that our definition of shape specifically excludes configurations of points that are identical to each other, except for the fact that one has been rotated rigidly
relative to the other about their mutual centroid. The pre-shape space is considered ‘pre-shape’ because it places some forms that differ only by rotation at different coordinate locations on the
sphere’s surface. This can be appreciated most easily by noting that the equilateral triangles occupying the two polar positions in Figure 7 are identical except for a 180° rotational difference. In
fact, the symmetry between the lower and upper hemispheres of the pre-shape sphere arises because of 180° rotational differences (= reflection). However, by correcting for such rotational differences
between shapes, the lower hemisphere of the pre-shape space can be mapped onto or merged with the upper hemisphere (or vice versa) thereby achieving a fully realized shape space in which the effects
of position, scale, and reflection-rotation have all been removed. Geometrically this transforms the pre-shape sphere into a shape hemisphere. It is this shape hemisphere (also termed the shape
half-space) that is being depicted in Figure 6.
Actual shapes that can be characterized by any set of three landmarks represent a realized subset of all possible shapes that map to a particular region on the shape hemisphere. This region may be
large or small depending on the amount of shape variation present in the sample. Shapes may be distributed uniformly through the region or arranged in density clusters, again depending on the
character of shape variation present in the sample. All the intuitive conceptual conventions we’ve grown accustomed to when thinking about shapes and shape analysis, along with the concepts we use to
describe shape variation (e.g., shapes that are similar are ‘close to’ one another, those that are different are ‘distant from’ one another) still apply. But now we understand why in a precise
mathematical sense. As a result, this knowledge of what size and shape really are can be used to inform our choice of data-analysis methods and our interpretations of the results of various
mathematical operations.
Best of all, these conventions don’t just apply to shapes represented by three landmarks. It’s convenient to work with the triangle shape space because all triangular shapes can be represented in
three uncorrelated dimensions we can easily ‘see’ in our mind’s eye and represent on a flat piece of paper or on a computer screen using various graphic conventions. But all shapes that can be
described by sets of landmarks have their own shape spaces that behave in precisely the same way.
Morphometricians and topologists call the mathematical surfaces on which shapes reside manifolds, which are mathematical spaces that, on a small enough scale resemble a Euclidean space of a certain
dimension. The triangle pre-shape space and the shape hemisphere are both examples of two-dimensional manifolds. The problem with the more complicated manifolds on which shapes defined by more than
three landmarks reside is that most of us find it difficult to think in more than three dimensions and our graphic tools for depicting higher dimensional spaces are very primitive. Nevertheless, we
can use the triangle shape manifold to gain insight in to the practicalities and complications of truly geometric shape analysis.
At this point I need to make a point about why shape data are different from other sets of data so as not to give you the impression that you can use Procrustes PCA to analyse anything and
everything. Recall that PCA (and PCoord, and FA, and MDS) is a generalized data-analysis procedure. It (and they) can be used to analyse data of any sort. The reason why standard distance-based data
are not ideally suited for shape data is that, in addition to relations among variables (e.g., covariance, correlation), shape data have an inherent geometry that needs to be respected at the design
and computational levels of the analysis. Distance data are simply magnitudes. By themselves they preserve no aspect of the fundamental geometry of the shape. This places constraints on the analysis
and interpretation of shape data that simply doesn’t exist for other, more generalized data types.
In a sense standardizing generalized data corrects for the same sorts of factors as the Procrustes standardization for position and size. In some cases it makes sense to standardize data. In others
it doesn’t make sense to do so. It almost always makes sense to undertake such standardizations for shape data. But there is no routinely invoked equivalent for rotation to a common reference in
non-shape data, The bottom line is, the inherent geometry of shape data means they are different in ways that are not handled well by distance-based variables, but that can be handled by the same
sorts of data-analysis procedures we have used throughout our discussion of linear regression and multivariate analysis, provided these shapes are represented by landmarks whose positions relative to
one another have been rigidly matched using Procrustes superposition (or an equivalent matching technique).
Let’s end this first exploration of shape theory by discussing a few of the complications that follow from shapes existing mathematically on a curved manifold. If the shape space is curved this means
that, strictly speaking, it is inappropriate to use tools of linear algebra (e.g., covariances, eigenanalysis) to explore and summarize relations among shapes. The basic problem is illustrated in
Figure 8.
Since hypotheses about shapes typically turn on the issue of shape similarity, and since shape similarity is quantified by the distance between two shapes or between a shape and the reference shape
in the context of the shape space, it is important to calculate the distances, between shapes accurately. The distances we’re interested in are the distances of the shortest curves between two
configurations’ coordinate positions along the shape manifold. However, the easiest distances to calculate are the linear distances between points on the manifold. The full, curved distance is termed
the Procrustes distance (ρ in Fig. 8) and the linear distance the partial Procrustes distance (Dρ in Fig. 8). As you might imagine, the equations used for calculating the Procrustes distance are
formidable, especially when the shape space is high-dimensional. However, we’ve all seen this problem before and are aware of readily available solution.
An important hint at the solution is provided in Figure 7. This is a map of the three-dimensional triangle pre-shape space that’s been flattened out to occupy two dimensions. Note that the method
employed to flatten the three-dimensional space has left the points in the lower hemisphere wildly distorted, but points in the upper hemisphere at positions close to their true three-dimensional
I’ve accentuated the difference between ρ and Dρ in Figure 8 by placing the green point (A) a good distance from the reference shape (red point). If, in your mind’s eye, you move the green point
along the curve toward the red point a difference between ρ and Dρ remains, but becomes far less marked. Therefore, if our sample of shapes are more-or-less similar to start with, substituting Dρ
for ρ should not introduce a large error into estimates, plots, and summaries of shape similarity.
Here it is appropriate to note that landmark datasets are often biased toward overall shape similarity insofar as it is comparatively rare to find sets of organisms with radically different
morphologies that can be represented adequately by sets of landmarks. The simple fact that the same set of landmarks must be able to be found on all specimens in the sample goes a long way toward
ensuring the the range of shape differences included in any landmark-based analysis is relatively small. For those who like to check assumptions, tests are available to determine how much distortion
is likely to be present in Procrustes-based shape analysis. So, we can simplify our problem by taking advantage of linear approaches to data analysis, providing our sample doesn’t encompass too much
shape variation.
This having been said, from a practical point-of view the problem of distortion due to inappropriate selection of tangent-plane orientation is usually far more important than distortion due to the
range of shape variation present in a sample. In previous discussions you may have wondered why it’s standard for Procrustes superposition to express shape variation as deviation from the mean shape.
After all, we don’t usually express distance-based data as a deviation from the mean distance. Moreover, there are other reference forms that could conceivably be used as a reference for a set of
shape data (e.g., either the juvenile or mature adult forms in an ontogenetic study, a putative ancestral form in an evolutionary study, a holotypic form in a taxonomic study). What, if anything, is
so darn special about the sample mean shape?
The answer to this question has to do not with some stylistic chauvinism among geometric morphometricians, but with the fundamental geometry of the Procrustes shape space. If shape variation in a
sample is moderate, it is possible to project shape configuration locations from their positions on the surface of the shape manifold to a linear plane where the well-developed, traditional, and
familiar tools of linear algebra can be used to quantify, summarize, represent, and test shape distributions. But there are an infinite number of possible planes that could be used for this purpose.
Which, from among this infinite set of tangent planes, is the best choice?
Figure 9 shows two possible tangent plane choices for a dataset composed of two groups, green and blue. In this hypothetical example the shapes exhibited by the green and blue groups are quite
distinct. The orientations of the two tangent planes are given by locating tangent points on the Procrustes shape hemisphere. Since each point on that surface corresponds to a configuration of
landmark points, this is tantamount to specifying a reference shape. The red dot represents the position of the mean shape for the pooled sample. The yellow dot represents an alternative and
arbitrary choice of reference shape. There are several ways of performing the projection, which we’ll discuss in a moment. For now however, let’s assume we’re going to perform a simple, orthogonal or
major axis projection to the tangent plane.
Once we’ve got a clear picture of what the choice of tangent planes entails for the analysis, the correct choice is equally clear. Selecting a point at the periphery of a shape distribution (the
yellow point in Fig. 9) guarantees a relatively high level of distortion in the resultant shape ordination due to the curvature of the Procrustes shape space. The effect has been exaggerated in
Figure 9 by placing the yellow dot well outside the limits of the observed sample’s shape variation. Nevertheless, and as I hope you can see from the diagram, the distortion will be present for any
reference shape choice drawn from the periphery (or beyond) of the shape distribution.
Contrast this with the situation that results from selecting the mean shape (= red dot) as the basis for tangent-plane orientation. This is a position that is guaranteed to orient the tangent plane
in a position that minimizes curved-space distortion for the sample. Distortion is present in projections to a tangent plane defined by the mean shape and will be greater for those points at the
periphery (as opposed to the centre) of the shape distribution. Some degree of distortion is inevitable whenever a distribution that exists in a high-dimensional space is represented in spaces of
lower dimensionality. But as you can see from Figure 9, the amount of distortion is much reduced. For this hypothetical dataset the difference is that of being able to recognize and interpret the
shape difference that characterize these groups or not.
The last shape-space issue we’ll discuss is the strategies available for making projections of points on the surface of the shape hemisphere to the tangent plane. Alternative approaches are
summarized in Figure 10.
For completeness I’ve added a second potential shape manifold to this diagram, shown in Figure 10 as the dashed circle inscribed between the origin and reference shape in the Procrustes shape
hemisphere. This is the Kendall shape space (or shape manifold), which is formed by relaxing the constraint that all shapes should be adjusted to unit centroid size. As you can see on the diagram,
whereas the Procrustes distance (ρ) can be estimated by partial Procrustes distance (Dρ), this is not the shortest distance between the reference shape and a configuration whose form is identical to
that of the comparison shape. This shortest distance is represented by Df in Figure 10, which is termed the full Procrustes distance. The difference here is that the blue point (B) does not lie on
the unit Procrustes shape manifold. Instead, it resides at a position along the same trajectory from the shape manifold’s origin, but internal to its surface. This is a position in which the
configuration’s shape is the same, but the size is slightly smaller.
Application of this ‘relaxed size’ convention produces an alternative shape space that provides a better overall fit of configurations to the reference, but does so at the cost of continually varying
the configuration’s size factor in a highly nonlinear manner. Once again, and as I hope you can appreciate from the diagram, for distributions of shapes that are all fairly similar—the typical case
in systematics in general—ρ, Dρ, and Df all converge on similar values. Accordingly, in such situations it’s usually acceptable to employ the more easily calculated partial Procrustes distance in
representing shape ordinations.
Regardless of this complication over which space is most appropriate to use as a basis for shape comparison, there are two primary ways of projecting points from the shape space(s) to a tangent
plane. The stereographic method projects shape configurations from the origin of the Procrustes shape hemisphere (and/or the polar position of the Kendall shape space) through the positions of the
geometrically homologous configurations on the surfaces of these two shape spaces to the tangent plane. In Figure 10 this projection is used to place point A-B.
Note that the stereographic method makes no distinction between the Procrustes shape manifold and Kendall shape manifold. Both ways of representing shape project to identical positions on a tangent
plane. This is a distinct advantage. The disadvantage of this approach is that the apparent distance between the reference and the projected point is always an overestimate of the true Procrustes
distance (ρ), especially for configurations lying at some distance from the reference shape. Indeed, for forms that lie along the equator of the Procrustes shape manifold (= at the pole of the
Kendall shape space) no projection is possible as the distance is infinite. However, this is a rarely encountered situation. In the overwhelming majority of cases involving biological shape analysis
the estimate is accurate, through the systematic bias to overestimation is always present.
Alternatively projection to the tangent plane may be undertaken in an orthogonal (= major axis) mode using the orientation of the tangent plane as the basis for projection. In Figure 10 orthogonal
projections are used to place points A and B on the tangent plane. For this projection strategy the advantages and disadvantages are reversed from those of the stereographic mode. Here, it makes a
difference as to whether you choose to match shapes using the Procrustes or Kendall shape spaces. But in either case the projection underestimates the partial Procrustes distance (Dρ) or the full
Procrustes distance (Df) respectively, both of which also underestimate the Procrustes distance. As with the stereographic projection, the magnitude of the distortion increases for those
configurations that differ markedly from the reference shape. But in no case does the projection lead to an infinite result. Overall, orthogonal projections from the Procrustes shape manifold produce
more accurate estimates of the Procrustes and partial Procrustes distances. Unsurprisingly, orthogonal projections from the Kendall shape manifold produce less accurate estimates of the Procrustes
and partial Procrustes distances, but better estimates of the full Procrustes distance.
If you’ve made it this far congratulations (and thank you). It might have seemed like a long, hard slog that had little to do with palaeontology per se. Please be assured that my purpose in this
essay—and in this column—is not to turn you into mathematicians. Rather, it’s to explain how the tools of mathematics can make us all better palaeontologists and, if truth be told, to lower the level
of intimidation most palaeontologists feel toward mathematics. You don’t have to understand the intricacies of non-linear algebra to be able to design and execute a Procrustes shape analysis
intelligently, provided you have a firm grasp of the fundamentals. Most importantly though, as Procrustes analysis is arguably the most powerful tool in the quantitative form-analysis kit, and since
the basic data of all palaeontology constitutes form, the ability to conduct such analyses should, in my view, be part of every palaeontologist’s training. Besides, once you’ve got a proper guide.
it’s not all that hard to understand.
As for software, I really haven’t covered anything in this column that is new in terms of procedures that require access to new software. Most of the algorithms and calculations have been described
in previous columns. The triangle examples are included as part of the Palaeo-Math 101-2 spreadsheet so you can see exactly how the figures I’ve used to illustrate this column were obtained. A full
analysis of the raw data can also be performed using Jim Rohlf’s tpsRelw program, which is downloadable from his SUNY morphometrics web site (http://life.bio.sunysb.edu/morph). I’ve written several
Mathematica routines that were used to perform all the analyses presented herein. These are available free on request. The only procedures that haven’t been covered in algorithmic detail are the
routines used for stereoscopic and orthogonal projection to a tangent plane. I need to develop a few additional concepts before I explain how these projections can be accomplished. Accordingly, they
will be the subject of a future column.
Finally, references. There really aren’t that many descriptions of this material that have been written to date for non-mathematical audiences. A full mathematical treatment is provided by Mardia and
Dryden (1989) and Dryden and Mardia (1998). The canonical conceptual treatment of the concepts involved are covered by Bookstein (1990). A useful, but somewhat overly complex introductory version of
this material can be found Zelditch et al. (2004). Finally, a short, but useful discussion is also included in the help section of Rohlf’s tpsRelw program. | {"url":"https://palass.org/publications/newsletter/palaeomath-101/palaeomath-part-17-shape-theory","timestamp":"2024-11-01T22:00:44Z","content_type":"text/html","content_length":"80188","record_id":"<urn:uuid:f153c146-545c-48cc-ab1c-d00cc16cad44>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00846.warc.gz"} |
How to generate random integers by group in julia?
You can use the rand function in Julia to generate random integers by group. Here is an example code snippet to generate random integers by group:
1 using Random
3 # Define the groups
4 groups = [1, 2, 3]
6 # Generate random integers by group
7 n = 5 # Number of random integers per group
8 random_integers = Dict{Int, Vector{Int}}()
9 for group in groups
10 random_integers[group] = rand(1:10, n)
11 end
13 # Print the random integers by group
14 for (group, integers) in random_integers
15 println("Group $group: $integers")
16 end
In this code snippet, we first define the groups as an array of integers (groups). We then generate random integers by group using a for loop, where we generate n random integers in the range 1 to 10
for each group. Finally, we print the generated random integers by group using another for loop. | {"url":"https://devhubby.com/thread/how-to-generate-random-integers-by-group-in-julia","timestamp":"2024-11-05T19:28:47Z","content_type":"text/html","content_length":"116042","record_id":"<urn:uuid:dc24f925-5fc2-423f-827b-5d31b00e862e>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00240.warc.gz"} |
Cite as
Anael Grandjean, Benjamin Hellouin de Menibus, and Pascal Vanier. Aperiodic Points in Z²-subshifts. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018). Leibniz
International Proceedings in Informatics (LIPIcs), Volume 107, pp. 128:1-128:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)
Copy BibTex To Clipboard
author = {Grandjean, Anael and Hellouin de Menibus, Benjamin and Vanier, Pascal},
title = {{Aperiodic Points in Z²-subshifts}},
booktitle = {45th International Colloquium on Automata, Languages, and Programming (ICALP 2018)},
pages = {128:1--128:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-076-7},
ISSN = {1868-8969},
year = {2018},
volume = {107},
editor = {Chatzigiannakis, Ioannis and Kaklamanis, Christos and Marx, D\'{a}niel and Sannella, Donald},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ICALP.2018.128},
URN = {urn:nbn:de:0030-drops-91323},
doi = {10.4230/LIPIcs.ICALP.2018.128},
annote = {Keywords: Subshifts of finite type, Wang tiles, periodicity, aperiodicity, computability, tilings} | {"url":"https://drops.dagstuhl.de/search/documents?author=Grandjean,%20Anael","timestamp":"2024-11-01T19:42:53Z","content_type":"text/html","content_length":"82049","record_id":"<urn:uuid:3fdf61cf-46f9-45c9-ab1c-490df69ac18e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00048.warc.gz"} |
Electrostatics Vs Magnetostatics Electrostatics Magnetostatics
Fields, Units, Magnetostatics European School on Magnetism Laurent Ranno (
[email protected]
) Institut N´eelCNRS-Universit´eGrenoble Alpes 10 octobre 2017 European School on Magnetism Laurent Ranno (
[email protected]
)Fields, Units, Magnetostatics Motivation Magnetism is around us and magnetic materials are widely used Magnet Attraction (coins, fridge) Contactless Force (hand) Repulsive Force : Levitation
Magnetic Energy - Mechanical Energy (Magnetic Gun) Magnetic Energy - Electrical Energy (Induction) Magnetic Liquids A device full of magnetic materials : the Hard Disk drive European School on
Magnetism Laurent Ranno (
[email protected]
)Fields, Units, Magnetostatics reminders Flat Disk Rotary Motor Write Head Voice Coil Linear Motor Read Head Discrete Components : Transformer Filter Inductor European School on Magnetism Laurent
Ranno (
[email protected]
)Fields, Units, Magnetostatics Magnetostatics How to describe Magnetic Matter ? How Magnetic Materials impact field maps, forces ? How to model them ? Here macroscopic, continous model Next lectures :
Atomic magnetism, microscopic details (exchange mechanisms, spin-orbit, crystal field ...) European School on Magnetism Laurent Ranno (
[email protected]
)Fields, Units, Magnetostatics Magnetostatics w/o magnets : Reminder Up to 1820, magnetism and electricity were two subjects not experimentally connected H.C. Oersted experiment (1820 - Copenhagen)
European School on Magnetism Laurent Ranno (
[email protected]
)Fields, Units, Magnetostatics Magnetostatics induction field B Looking for a mathematical expression Fields and forces created by an electrical circuit (C1, I) Elementary dB~ induction field created
at M ~ ~ µ0I dl^u~ Biot and Savart law (1820) dB(M) = 4πr 2 European School on Magnetism Laurent Ranno (
[email protected]
)Fields, Units, Magnetostatics Magnetostatics : Vocabulary µ I dl~ ^ u~ dB~ (M) = 0 4πr 2 B~ is the magnetic induction field ~ 1 1 B is a long-range vector field ( r 2 becomes r 3 for a closed | {"url":"https://docslib.org/doc/77961/electrostatics-vs-magnetostatics-electrostatics-magnetostatics","timestamp":"2024-11-14T07:53:55Z","content_type":"text/html","content_length":"58689","record_id":"<urn:uuid:06ebd4ad-170e-4a5c-a3a6-c3550d09e820>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00700.warc.gz"} |
If function can be used in the formula
That's it, that's the most wanted feature. We get it. Conditions and logical operators are required for many calculators. And now you can build them. If function is ready!
Let's go right to the meat. If function works exactly like you are used to from Spreadsheet programs like MS Excel. Here is definition:
if(CONDITION, TRUE, ELSE)
The if function needs a CONDITION. For example 3 > 4 (3 is bigger than 4). If the condition is true, then the if function returns whatever you add to the TRUE part of the function. If the condition
is false, then the if function returns whatever you add to the ELSE part of the function. Here is a simple example:
if(3 > 4, 10, 20)
This example says: If 3 is bigger than 4, return 10 else return 20. And because 3 is less than 4, this example returns the number in ELSE part which is 20. But this example is quite dump, because it
will always return the same thing. We could easily just type directly 20 instead of this function. The if function gets much more interesting when you use field variables. Here is the example with a
if(F123 > 4, 10, 20)
Now, we cannot easily say what the function returns, right? It depends on the value in the field with id F123. So if the value of F123 is bigger than 4, the function will return 10. If the value is
less than 4, it will return 20. You can do even more advanced stuff with if function:
if(F123 > 4 and F124 != F125, F127, 0) * 3 / F126
That is quite complicated, but if your calculator requires it, you can do it. Here is what it says: If the value of F123 is bigger than 4 and value of F124 is not equal to value of F125, return value
of F127 else return 0. And whatever this if function returns, multiply it by 3 and divide it by value of F126. Do you think that's the most complicated example we can think of? It still can get
worse. You can combine multiple if or any other available functions into one another:
if(F123 > if(F128 == 10, F128, -1) and F124 != F125, F127, 0) * 3 / sin(F126)
I'll rather let you absorb that one alone. | {"url":"https://www.calculoid.com/blog/50-if-function-can-be-used-in-the-formula","timestamp":"2024-11-09T03:45:06Z","content_type":"application/xhtml+xml","content_length":"39265","record_id":"<urn:uuid:dedb3e0e-f45f-482d-89dd-f7b7244514f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00844.warc.gz"} |
Graham Farmelo on Paul Dirac and his concept of Mathematical Beauty
Adjunct Professor of Physics at Northeastern University in Boston, Graham Farmelo, on Paul Dirac and the Religion of Mathematical Beauty. Apart from Einstein, Paul Dirac was probably the greatest
theoretical physicist of the 20th century. Dirac, co-inventor of quantum mechanics, is now best known for conceiving of anti-matter and also for his deeply eccentric behavior. For him, the most
important attribute of a fundamental theory was its mathematical beauty, an idea that he said was "almost a religion" to him.
[youtube http://www.youtube.com/watch?v=YfYon2WdR40&w=853&h=480] | {"url":"https://blog.sghatpande.eu/2013-01-13-graham-farmelo-on-paul-dirac-and-his-concept-of-mathematical-beauty/","timestamp":"2024-11-02T23:24:01Z","content_type":"text/html","content_length":"10593","record_id":"<urn:uuid:3d73cd9f-f2f5-4d1c-a2ce-7dd3ebe4133f>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00215.warc.gz"} |
Identifying Graphs of Quadratic Equations in Vertex Form
Question Video: Identifying Graphs of Quadratic Equations in Vertex Form Mathematics • Second Year of Secondary School
Which of the following graphs represents the equation π ¦ = β (π ₯ β 1)Β²? [A] Graph A [B] Graph B [C] Graph C [D] Graph D [E] Graph E
Video Transcript
Which of the following graphs represents the equation π ¦ is equal to negative π ₯ minus one all squared?
In this question, weβ re given five graphs, and we need to determine which of these five graphs represents the equation π ¦ is equal to negative one times π ₯ minus one all squared. And thereβ s
a few different ways we could go about this. For example, we could try eliminating options by determining points on the curve. However, this method will only work if weβ re given the options. So
instead, weβ re just going to sketch the curve π ¦ is equal to negative one times π ₯ minus one all squared.
And to help us sketch this curve, we need to notice something interesting. The equation weβ re given is in vertex form. Thatβ s the form π ¦ is equal to π times π ₯ minus β all squared plus
π , where π , β , and π are real numbers and π is not zero. And we can recall that the values of π , β , and π give us useful information about our curve. First, the coordinates of
the vertex of our parabola will be β , π . This is also sometimes called the turning point.
Letβ s determine the values of π , β , and π for the equation given to us in the question. First, the coefficient of our parentheses is negative one. So, π is negative one. Next, weβ re
subtracting one from π ₯. So, our value of β is one. Finally, we have no constant at the end of our expression. So, the value of π is zero. Therefore, if we substitute the value of β is one
and π is zero, we get the vertex will have coordinates one, zero. And if we want, we can add the coordinates of the vertices to all four of our options to eliminate options.
We see in option (A) the vertex is not at one, zero. In option (B), the vertex is not at one, zero. And in option (D), the vertex is not at one, zero. So, these three options cannot be graphs of the
equation given to us in the question. However, itβ s not necessary to use elimination to answer this question. So, letβ s continue sketching our graph. Next, we recall the value of π gives us
information about the shape of our parabola. In particular, if π is positive, our parabola opens upwards, and if π is negative, our parabola opens downwards. In our case, our value of π is
negative one. And we can see option (C) opens upwards. So, option (C) cannot be correct. And this is enough to answer our question by elimination; only option (E) can represent the graph of this
However, for due diligence, letβ s finish the sketch of our curve. Weβ ve shown the coordinates of the vertex of this parabola is one, zero And itβ s a parabola opening downwards. However, thereβ
s an infinite number of parabolas which open downwards with vertex at coordinates one, zero. So, we should also find the coordinates of one extra point on our curve. Weβ ll find the coordinates of
the π ¦-intercept, which we can find by substituting π ₯ is equal to zero into the equation of our curve. We get π ¦ is equal to negative one times zero minus one all squared, which we can
evaluate is negative one. So, the π ¦-intercept of this curve is negative one, which we can see also agrees with option (E).
Therefore, we were able to show of the five given options only option (E) represents the equation π ¦ is equal to negative one times π ₯ minus one all squared. | {"url":"https://www.nagwa.com/en/videos/948104721903/","timestamp":"2024-11-06T07:49:44Z","content_type":"text/html","content_length":"253344","record_id":"<urn:uuid:b04590b4-58db-4e80-a044-23d205272ef0>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00835.warc.gz"} |
Returns the Tutte polynomial of G
This function computes the Tutte polynomial via an iterative version of the deletion-contraction algorithm.
The Tutte polynomial T_G(x, y) is a fundamental graph polynomial invariant in two variables. It encodes a wide array of information related to the edge-connectivity of a graph; “Many problems
about graphs can be reduced to problems of finding and evaluating the Tutte polynomial at certain values” [1]. In fact, every deletion-contraction-expressible feature of a graph is a
specialization of the Tutte polynomial [2] (see Notes for examples).
There are several equivalent definitions; here are three:
Def 1 (rank-nullity expansion): For G an undirected graph, n(G) the number of vertices of G, E the edge set of G, V the vertex set of G, and c(A) the number of connected components of the graph
with vertex set V and edge set A [3]:
\[T_G(x, y) = \sum_{A \in E} (x-1)^{c(A) - c(E)} (y-1)^{c(A) + |A| - n(G)}\]
Def 2 (spanning tree expansion): Let G be an undirected graph, T a spanning tree of G, and E the edge set of G. Let E have an arbitrary strict linear order L. Let B_e be the unique minimal
nonempty edge cut of \(E \setminus T \cup {e}\). An edge e is internally active with respect to T and L if e is the least edge in B_e according to the linear order L. The internal activity of T
(denoted i(T)) is the number of edges in \(E \setminus T\) that are internally active with respect to T and L. Let P_e be the unique path in \(T \cup {e}\) whose source and target vertex are the
same. An edge e is externally active with respect to T and L if e is the least edge in P_e according to the linear order L. The external activity of T (denoted e(T)) is the number of edges in \(E
\setminus T\) that are externally active with respect to T and L. Then [4] [5]:
\[T_G(x, y) = \sum_{T \text{ a spanning tree of } G} x^{i(T)} y^{e(T)}\]
Def 3 (deletion-contraction recurrence): For G an undirected graph, G-e the graph obtained from G by deleting edge e, G/e the graph obtained from G by contracting edge e, k(G) the number of
cut-edges of G, and l(G) the number of self-loops of G:
\[\begin{split}T_G(x, y) = \begin{cases} x^{k(G)} y^{l(G)}, & \text{if all edges are cut-edges or self-loops} \\ T_{G-e}(x, y) + T_{G/e}(x, y), & \text{otherwise, for an arbitrary edge $e$ not a
cut-edge or loop} \end{cases}\end{split}\]
GNetworkX graph
instance of sympy.core.add.Add
A Sympy expression representing the Tutte polynomial for G.
Some specializations of the Tutte polynomial:
□ T_G(1, 1) counts the number of spanning trees of G
□ T_G(1, 2) counts the number of connected spanning subgraphs of G
□ T_G(2, 1) counts the number of spanning forests in G
□ T_G(0, 2) counts the number of strong orientations of G
□ T_G(2, 0) counts the number of acyclic orientations of G
Edge contraction is defined and deletion-contraction is introduced in [6]. Combinatorial meaning of the coefficients is introduced in [7]. Universality, properties, and applications are discussed
in [8].
Practically, up-front computation of the Tutte polynomial may be useful when users wish to repeatedly calculate edge-connectivity-related information about one or more graphs.
A. Björklund, T. Husfeldt, P. Kaski, M. Koivisto, “Computing the Tutte polynomial in vertex-exponential time” 49th Annual IEEE Symposium on Foundations of Computer Science, 2008 https://
Y. Shi, M. Dehmer, X. Li, I. Gutman, “Graph Polynomials,” p. 14
Y. Shi, M. Dehmer, X. Li, I. Gutman, “Graph Polynomials,” p. 46
D. B. West, “Introduction to Graph Theory,” p. 84
J. A. Ellis-Monaghan, C. Merino, “Graph polynomials and their applications I: The Tutte polynomial” Structural Analysis of Complex Networks, 2011 https://arxiv.org/pdf/0803.3079.pdf
>>> C = nx.cycle_graph(5)
>>> nx.tutte_polynomial(C)
x**4 + x**3 + x**2 + x + y
>>> D = nx.diamond_graph()
>>> nx.tutte_polynomial(D)
x**3 + 2*x**2 + 2*x*y + x + y**2 + y | {"url":"https://networkx.org/documentation/latest/reference/algorithms/generated/networkx.algorithms.polynomials.tutte_polynomial.html","timestamp":"2024-11-06T04:06:05Z","content_type":"text/html","content_length":"45084","record_id":"<urn:uuid:b8cb645c-284c-4664-b5ec-b4ffa6e3eaf7>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00418.warc.gz"} |
1. Do Van Luu, Higher-order optimality conditions in nonsmooth cone-constrained multiobjective programming, Nonlinear Functional Analysis and Applications, 15 (2010), 381 - 393, Scopus.
2. Bùi Công Cường, , L. H. Son , H. T.M. Chau, Some context fuzzy clustering methods for classification problems. In: Proceeding of the 2010 Symposium on Information and Communication Technology
(SoICT’10), (2010), 34 - 40.
3. Bùi Công Cường, Le Chi Ngoc, Some fuzzy operators with threshold and application to fuzzy association rules in data mining. Advanced Fuzzy Mathematics, 5 (2010), 245 - 282.
4. Bùi Công Cường, N. C. Luong , H. V. Long, Approximation properties of fuzzy systems for multi-variables functions. Pan-American Mathematical Journal, 20 (2010), 95 - 111.
5. Bùi Công Cường, H. V. Long, On the approximate realization of a class of stochastic processes by Spline functions fuzzy systems. Advances in Fuzzy Mathematics, 5 (2010), 47 - 64.
6. Nong Quoc Chinh , Do Ngoc Diep, Course of Differential Geometry, NXB DHQG HN, 2010 (in Vietnamese)
7. Do Ngoc Diep, Huynh Van Duc, Bui Doan Khanh, An algebraic approach to building quantum algorithms, Journal of Math.Appl. (Tap chi Ung dung Toan hoc), VII(2010), 93-110., (in Vietnamese)
8. Huy Tai Ha, Susan Morey, Embedded associated primes of powers of square-free monomial ideals, Journal of Pure and Applied Algebra 214 (2010), 301 - 308, preprint arXiv:0805.3738, SCI(-E); Scopus.
9. Cung The Anh, Nguyễn Minh Chương, Tran Dinh Ke, Global attractor for the m-semiflow generated by a quasilinear degenerate parabolic equation, Journal of Mathematical Analysis and Applications,
363 (2010), 444–453, SCI(-E); Scopus.
10. Nguyen Minh Chuong, Ha Duy Hung, Maximal functions and weighted norm inequalities on local fields, Applied and Computational Harmonic Analysis 29 (2010), 272–286, Scopus.
11. Nguyễn Minh Chương, Ha Duy Hung, A Muckenhoupt's weight problem and vector valued maximal inequalities over local fields, P-Adic Numbers, Ultrametric Analysis, and Applications 2 (2010), 305–321,
SCI(-E); Scopus.
12. Trương Xuân Đức Hà, Optimality conditions for several types of efficient solutions of set-valued optimization problems, In: Nonlinear Analysis and Variational Problems, Springer (2010), 305-324.
13. J.-C. Yao, Nguyen Dong Yen, Parametric variational system with a smooth-boundary constraint set, In: Variational Analysis and Generalized Differentiation in Optimization and Control Eds.),
Springer Verlag 47 (2010), 205 - 221.
14. Nguyen Xuan Tan, T. T. T. Duong, On the generalized quasi-equilibrium problem of type I and related problem, Advances in Nonlinear Variational Inequalities 13 (2010), 29 - 47.
15. Nguyen Khoa Son, D. D. Thuan, The structured controllability radii of higher order descriptor systems, Vietnam Journal of Mathematics 38 (2010), 373 - 380, Scopus.
16. Nguyen Khoa Son, B. T. Anh, Robust stability of a class of positive Quasi-polynomials in banach spaces, Mathematical Notes 88 (2010), 651 - 661, SCI(-E); Scopus.
17. Phạm Hữu Sách, L.-J. Lin, Systems of generalized qusivariational inclusion problems with weak convexity and weak continuity and variants of set-valued vector Ekeland variational principle, In:
Proceedings of the 9th International Conference on Fixed Point Theory and Its Applications (2010),115 - 129.
18. Ho Dang Phuc, S. Graner, M. K. Allvin, D. L. Huong, G. Krantz and I. Mogren, Adverse perinatal and neonatal outcomes and their determinants in rural Vietnam 1999 - 2005, Paediatric and Perinatal
Epidemiology (2010), 1 - 11, SCI(-E); Scopus.
19. Vu Ngoc Phat, P. Niamsup, A novel exponential stability condition of hybrid neural networks with time-varying delay, Vietnam Journal of Mathematics 38 (2010), 341 - 351, Scopus.
20. Vu Ngoc Phat, L. V. Hien, Robust stabilization of linear polytopic control systems with mixed delays, Acta Mathematica Vietnamica 35 (2010), 427 - 438, Scopus.
21. M. Latapy, Phan Thi Ha Duong, C. Crespelle, N. T. Quy, Termination of multipartite graph series arising from complex network modelisation. In: The 4th Annual International Conference on
Combinatorial Optimization and Applications (COCOA’10) (2010), 1 -- 22.
22. Nguyen Tu Cuong, N. V. Hoang and P. H. Khanh, Asymptotic stability of certain sets of associated prime ideals if local cohomology modules, Communications in Algebra 38 (2010), 4416 -- 4429, SCI
(-E); Scopus.
23. Nguyen Tu Cuong, Doan Trung Cuong, Hoang Le Truong, On a new invariant of finitely generated modules over local rings, Journal of Algebra and its Applications 9 (2010), 959 -- 976, preprint
arXiv:1003.3972, SCI(-E); Scopus.
24. Ha Huy Bang, B. V. Huong, Behavior of the sequence of norms of primitives of a function in Lorentz spaces, Vietnam Journal of Mathematics 38 (2010), 425 -- 433, Scopus.
25. Thai Doan Chuong, J.-C. Yao, Nguyen Dong Yen, Further results on the lower semicontinuity of efficient point multifunctions, Pacific Journal of Optimization, 6 (2010), 405 -- 422, SCI(-E);
26. N. H. Chieu, J.-C. Yao, Nguyen Dong Yen, Relationships between Robinson metric regularity and Lipschitz-like behavior of implicit multifunctions, Nonlinear Analysis: Theory, Methods &
Applications 72 (2010), 3594 -- 3601, SCI; Scopus.
27. X. Q. Yang, Nguyen Dong Yen, Structure and weak sharp minimum of the Pareto solution set for piecewise linear multiobjective optimization, Journal of Optimization Theory and Applications, 147
(2010), 113 -- 124, SCI; Scopus.
28. Ha Huy Vui, Nguyen Hong Duc, Lojasiewicz inequality at infinity for polynomials in two real variables, Mathematische Zeitschrift, 266 (2010), 243 -- 264, SCI(-E); Scopus.
29. Ha Huy Vui, P. T. Son, Reprensentations of positive polynomials and optimization on noncompact semialgebraic sets, SIAM J. Optim., 20 (2010), 3082 -- 3103.
30. Dao Quang Tuyen, On some rate of convergence questions, Studia Scientiarum Math. Hungarica 47 (2010), 373 -- 387.
31. Hoang Tuy, $D(C)$-optimization and robust global optimization, J. Glob. Optim., 47 (2010), 485 -- 501.
32. Do Van Luu, Higher-order optimality conditions in nonsmooth cone-const-rained multiobjective programming, Nonlinear Functional Analysis and Applications 15 (2010), 429 -- 441.
33. Hoang Le Truong, S. Goto, S. Kimura, T. T. Phuong, Quasi-socle ideals and Goto numbers of parameters, Journal of Pure and Applied Algebra, 214 (2010), 501 -- 511, SCI(-E); Scopus.
34. Ngo Viet Trung, J. K. Verma, Hilbert functions of multigraded algebras, mixed multiplicities of ideals and their applications, Journal of Commutative Algebra, 2 (2010), 515 -- 565, SCI(-E);
35. Nguyen Minh Tri, T. T. Khanh, On the analyticity of solutions to semilinear differential equations degenerated on a submanifold, Journal of Differential equations, 249 (2010), 2440 -- 2475, SCI
(-E); Scopus.
36. Nguyen Minh Tri, V. T. T. Hien, Fourier transform and smoothness of solutions of a class of semilinear degenerate elliptic equations with double characteristics, Russian Journal of Mathematical
Physics, 17 (2010), 192 -- 206, SCI(-E); Scopus.
37. Ho Minh Toan, Classification of certain inductive limit type actions on approximate interval algebras, Journal of the Ramanujan Mathematical Society 25 (2010), 329 -- 343, SCI(-E); Scopus.
38. Nguyen Quoc Thang, Equivalent conditions for (weak) corestriction principle for non-Abelian etale cohomology of group schemes, Vietnam Journal of Mathematics, 38 (2010), 89 -- 116, Scopus.
39. Nguyen Quoc Thang, D. P. Bac, On the topology of relative orbits for actions of algebraic groups over complete fields, Proceedings of the Japan Academy, Series A, Mathematical Sciences, 86
(2010), 133 -- 138, SCI(-E); Scopus.
40. Nguyen Quoc Thang, D. P. Bac, On a relative version of a theorem of Bogomolov over perfect fields and its applications, Journal of Algebra, 324 (2010), 1259 -- 1278, SCI(-E); Scopus.
41. Le Cong Thanh, Minimum connected dominating sets in finite graphs, Vietnam Journal of Mathematics, 38 (2010) 157 -- 168, SCI(-E); Scopus.
42. Phan Thien Thach, Duality equation and efficiency conditions in a vector optimization problem, Vietnam Journal of Mathematics 38 (2010), 1 -- 8, Scopus.
43. Nguyen Xuan Tan, L.-J. Lin, Quasi-equilibrium inclusion problems of the Blum-Oettli type and related problems, In: Optimization and Optimal Control, Springer Optimization and Its Applications
2010, 05-119 2010.
44. Nguyen Xuan Tan, T. T. T. Duong, On the generalized quasi-equilibrium problem of type I and related problems, Advances in Nonlinear Variational Inequalities 13 (2010), 29 -- 47, Scopus.
45. Ngô Đắc Tân, 3-arc-dominated digraphs, SIAM Journal on Discrete Mathematics, 24 (2010), 1153 - 1161, SCI(-E); Scopus.
46. Nguyen Khoa Son, B. T. Anh, Robust stability of positive linear systems in Banach spaces, Journal of Difference Equations and Applications, 16 (2010), 1447 -- 1461, SCI(-E); Scopus.
47. Nguyen Khoa Son, B. T. Anh, The robustness of strong stability of posititive homogeneous difference systems under parameter perturbations, Numerical Functional Analysis and Optimization, 31
(2010), 97 -- 111, SCI(-E); Scopus.
48. Nguyen Khoa Son, D. D. Thuan, The structured distance to uncontrollability under multi-perturbations: an approach using multi-valued linear operators, Systems and Control Letters, 59 (2010), 476
-- 483, SCI(-E); Scopus.
49. Nguyen Khoa Son, B. T. Anh, Robust stability of delay difference systems under fractional perturbations in infinite-dimensional spaces, International Journal of Control, 83 (2010), 498 -- 505,
SCI(-E); Scopus.
50. Doan Thai Son, A. Kalauch, S. Siegmund and F. R. Wirth, Stability radii for positive linear time-invariant systems on time scales, Systems and Control Letters, 59 (2010), 173 -- 179, SCI(-E);
51. Phạm Hữu Sách, Le Anh Tuan and Nguyen Ba Minh, Approximate duality for vector quasi equilibrium problems and applications, Nonlinear Analysis: Theory, Methods & Applications, 72 (2010), 3994 --
4004, SCI(-E); Scopus.
52. Phạm Hữu Sách, Le Anh Tuan and G. M. Lee, Upper semicontinuity in a parametric general variational problem and application, Nonlinear Analysis: Theory, Methods & Application, 72 (2010), 1500 --
1513, SCI(-E); Scopus.
53. Phạm Hữu Sách, Le Anh Tuan, Sensitivity in mixed generalized vector quasiequilibrium problems with moving cones, Nonlinear Analysis: Theory, Methods & Applications, 73 (2010), 713-724, SCI(-E);
54. Phạm Hữu Sách, Le Anh Tuan and G. M. Lee, Upper semicontinuity result for the solution mapping of a mixed parametric generalized vector quasiequilibrium problem with moving cones, Journal of
Global Optimization, 47 (2010), 639 -- 660, SCI(-E); Scopus.
55. Phạm Hữu Sách, L. J. Lin and Le Anh Tuan, Generalized vector quasi-variational inclusion problems with moving cones, Journal of Optimization Theory and Applications, 147 (2010), 607 -- 620, SCI
(-E); Scopus.
56. Ta Duy Phuong, M. V. Bulatov and N. P. Rahvalov, Numerical solution boundary problem for linear differential-algebraic equations of second order, J. Middle Volga Math. Soc.6 (2010), 405 -- 422.
(In Russian).
57. Ho Dang Phuc, M. K. Allvin, S. Graner, B. Hojer and A. Johansson, Regnancies and births among adolescents: A population-based prospective study in rural Vietnam, Sexual & Reproductive Healthcare,
1 (2010), 15 -- 19.
58. Ho Dang Phuc, G. David, , N. T. K. Chuc and L. Lindholm, Inequality in mortality in Vietnam during a period of rapid transitions, Social Science & Medicine, 70 (2010), 232 -- 239, SCI(-E);
59. Ho Dang Phuc, N. Q. Hoa, N. V. Trung, M. Larsson, B. Eriksson, N. T. K. Chuc and C. S. Lundborg, Decreased streptococcus pneumoniae susceptibility to oral antibiotics among children in rural
Vietnam: a community study, BMC Infectious Diseases, 10 (2010) 85,SCI(-E); Scopus.
60. Ho Dang Phuc, N. X. Thanh and N. T. K. Chuc, Migration and under five morbidity in Bavi, Vietnam, In: The Dynamics of Migration, Health and Livelihoods, INDEPTH Network Perspectives, Ashgate
Publishing, London 2009, 169 - 182.
61. Hoang Xuan Phu, Minimizing convex functions with bounded perturbations, SIAM Journal on Optimization, 20 (2010), 2709-2729, SCI(-E); Scopus.
62. Hoang Xuan Phu, Global infimum of strictly convex quadratic functions with bounded perturbations, Mathematical Methods of Operations Research, 72 (2010), 327 -- 345, SCI(-E); Scopus.
63. Vu Ngoc Phat, V. Jeyakumar, Stability, stabilization and duality for linear time-varying systems, Optimization, 59 (2010), 447 -- 460, SCI(-E); Scopus.
64. Vu Ngoc Phat, P. T. Nam and H. M. Hien, Asymptotic stability of linear state-delayed neutral systems with polytope type uncertainties, Dynamic Systems and Applications 19 (2010), 63 -- 74, SCI
(-E); Scopus.
65. Vu Ngoc Phat, Switched controller design for stabilization of nonlinear hybrid systems with time-varying delays in state and control, Journal of the Franklin Institute, 347 (2010), 195 -- 207,
SCI(-E); Scopus.
66. Vu Ngoc Phat, Q. P. Ha and H. Trinh, Parameter-dependent $H_\infty$ control for linear time delay polytopic systems, Journal of Optimization Theory and Applications, 147 (2010), 58 -- 70, SCI
(-E); Scopus.
67. Vu Ngoc Phat, $H_\infty$ control for nonlinear time-varying delay systems with polytopic type uncertainties, Nonlinear Analysis: Theory, Methods and Applications, 72 (2010), 4254 -- 4263, SCI
(-E); Scopus.
68. Vu Ngoc Phat, P. T. Nam, Exponential stability delayed Hopfield neural networks with various activation functions and polytopic uncertainties, Physics Letters A, 374 (2010), 2527 -- 2533, SCI
(-E); Scopus.
69. Vu Ngoc Phat, H. Trinh, Exponential stabilization of neural networks with various activation functions and mixed time-varying delays, IEEE Trans. Neural Networks, 21 (2010), 1180 -- 1185 .
70. Le Dung Muu, D. X. Luong, Combining the projection method and the penalty function to solve the variational inequalities with monotone mappings, International Journal of Optimization. Theory
Methods and Applications, 2 (2010), 124–137.
71. Le Dung Muu, T. D. Quoc, One step from DC optimization to DC mixed variational inequalities, Optimization, 59 (2010), 63 -- 76, SCI(-E); Scopus.
72. Le Dung Muu, L. T. H. An, P. D. Tao and N. C. Nam), Methods for optimization over the efficient and weakly efficient sets of an affine fractional vector optimization program, Optimization, 59
(2010), 77 -- 93, SCI(-E); Scopus.
73. Ha Huy Khoai, On complex analysis in Vietnam, Acta Math. Vietnamica, 35 (2010), 1 -- 6, Scopus.
74. Tran Thi Thu Huong, D. Hefetz, A. Saluz, An application of the combinatorial Nullstellensatz to a graph labelling problem, Journal of Graph Theory, 65 (2010), 70 -- 82, SCI(-E); Scopus.
75. Le Tuan Hoa, Do Hoang Giang, On local cohomology of a tetrahedral curve, Acta Math. Vietnamica, 35 (2010), 229 -- 241, Scopus.
76. Le Tuan Hoa, Nguyen Duc Tam, On some invariants of a mixed product of ideals, Archiv der Mathematik, 94 (2010), 327 -- 337, SCI(-E); Scopus.
77. Le Tuan Hoa, Tran Nam Trung, Partial Castelnuovo-Mumford regularities reduction number of sums and intersections of monomial ideals, Mathematical Proceedings of the Cambridge Philosophical
Society, 149 (2010), 229 -- 246, SCI(-E); Scopus.
78. Le Tuan Hoa, M. Hellus and J. Stueckrad, Castelnuovo-Mum-ford regularity and reduction number of some monomial curves, Proceedings of the American Mathematical Society, 138 (2010), 27 -- 35, SCI
(-E); Scopus.
79. Dinh Nho Hao, Pham Minh Hien, T. Johansson and D. Lesnic, A variational method for a Cauchy problem for elliptic equations, Journal of Algorithms and Computational Technology, 4 (2010), 89 -- 119
, Scopus.
80. Dinh Nho Hao, T. N. T. Quyen, Convergence rates for Tikhonov regularization of coefficient identification problems in Laplace-type equations, Inverse Problems, 26 (2010), 23p, SCI(-E); Scopus.
81. Dinh Nho Hao, N. V. Duc and D. Lesnic, Regularization of parabolic equations backward in time by a non-local boundary value problem method, IMA Journal of Applied Mathematics, 75 (2010), 291 --
315, SCI(-E); Scopus.
82. Phung Ho Hai, H. Esnault, Two small remarks on Nori fundamental group scheme, In: Advanced Studies in Pure Mathematics, 60 (2010), 237 -- 243.
83. Trương Xuân Đức Hà, The Ekeland variational principle for Henig proper minimizers and super minimizers, Journal of Mathematical Analysis and Applications, 364 (2010), 156 -- 170, SCI(-E); Scopus.
84. L. M. Ha, Phan Thi Ha Duong, Order structure and energy of conflicting chip firing game, Acta Math. Vietnamica., 35 (2010), 289 -- 301.
85. N. N. Doanh, Phan Thi Ha Duong, N. T. N. Anh, A. Drogoul and J. D. Zucker, Disk graph-based model: a graph theoretical approach for linking agent-based model and dynamical systems, In:
Proceedings of IEEE-RIVF International Conference on Computing and Communication Technologies, (2010), 254 -- 257.
86. L. M. Ha, N. A. Tam, Phan Thi Ha Duong, Algorithmic aspects of the reachability of conflicting chip firing game, Advances in Intelligent Information and Database Systems,283 (2010), 359 -- 370.
87. Phan Thi Ha Duong, Tran Thi Thu Huong, On the stability of sand piles model, Theoretical Computer Science, 411 (2010), 594 -- 601.
88. Do Ngoc Diep, H. D. Ton, On the electric-magnetic Goddard-Nuyts-Olive duality, Vietnam J. Math, 37 (2009), 457 -- 462.
89. Do Ngoc Diep, A quantization procedure of fields based on geometric Langlands correspondence, International J. of Mathematics and Mathematical Sciences, 2009 (2009), Article ID: 749361, 14.
90. Nguyen Tu Cuong, L. T. Nhan and N. T. K. Nga, On pseudo supports and non-Cohen-Macaulay locus of finitely generated modules, J. Algebra,323 (2010), 3029 -- 3038.
91. Doan Trung Cuong, Hodge cohomology of étale Nori finite vector bundles, Int. Math. Res. Not., No, 2 (2010), 320 -- 333.
92. Nguyen Dinh Cong, M. V. Bulatov, V. K. Gorbunov and Ju. V. Martynenko, Variational approaches to numerical solution of differential algebraic equations, Computational Technologies, 15 (2010), 3
-13. (In Russian)
93. Nguyen Dinh Cong, N. T. The, Stochastic differential-algebraic equations of index 1, Vietnam J. Math, 38 (2010), 117 - 131.
94. Ha Huy Bang, V. N. Huy, Behavior of the sequence of norms of primitives of a function, J. Approx. Theory, 162 (2010), 1178- 1186.
95. Phan Thanh An, D. T. Giang and N. N. Hai, Some computational aspects of geodesic convex sets in a simple polygon, Numerical Functional Analysis and Optimization, 31 (2010), 221 -231
96. Phan Thanh An, Method of orienting curves for determining the convex hull of a finite set of points in the plane, Optimization, 59 (2010), 175 - 179
97. Phan Thanh An, Reachable grasps on a polygon of a robot arm: finding convex ropes without triangulation, International Journal of Robotics and Automation, 4 (2010), 304 - 310.
98. Ho Dang Phuc, Nguyen Xuan Thanh, Curt Lofgren, Nguyen Thi Kim Chuc and L. Lindholm, An assessment of the implementation of the Health care funds for the poor policy in rural Vietnam, Health
Policy, 98 (2010), 58 -- 64. | {"url":"http://math.ac.vn/en/component/staff/?task=showPrint&year=2010","timestamp":"2024-11-11T00:34:00Z","content_type":"application/xhtml+xml","content_length":"75814","record_id":"<urn:uuid:ec73f609-147a-4e20-bf86-f46a885d301b>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00027.warc.gz"} |
INMO Class 2 Model question papers
Model question papers for International Mathematical Olympiad Exam for Standard II Students. The Real exam structure for INMO Class 2 exam includes
The actual test paper has 50 questions.
Time allowed : 60 minutes.
There are 3 sections, 20 questions in section I ( 20 in section II and 10 in section III.
Section I : Logical Reasoning,
Section II : Mathem?iirai Reasoning &
Section III : Everyday Mathematics
Numerals and number name, Addition, Multiplication, Division, Fractions, Money, Length (conversions), Weight, Capacity, Time. Point, Line and plane Figures.
10-10-2014, 07:52 AM
Thank you for inmo previous year question paper. Please post more papers of international maths olympiad exam of 2010, 2011 and 2012 years
12-02-2014, 12:06 AM
(09-23-2014, 08:52 AM)techofficer Wrote: Model question papers for International Mathematical Olympiad Exam for Standard II Students. The Real exam structure for INMO Class 2 exam includes
The actual test paper has 50 questions.
Time allowed : 60 minutes.
There are 3 sections, 20 questions in section I ( 20 in section II and 10 in section III.
Section I : Logical Reasoning,
Section II : Mathem?iirai Reasoning &
Section III : Everyday Mathematics
Numerals and number name, Addition, Multiplication, Division, Fractions, Money, Length (conversions), Weight, Capacity, Time. Point, Line and plane Figures.
Please email me last years imo question paper for class 2 if you have at sweta.chandna@gmail.com
Thanks and Regards,
09-26-2015, 05:11 PM
(09-23-2014, 08:52 AM)techofficer Wrote: Model question papers for International Mathematical Olympiad Exam for Standard II Students. The Real exam structure for INMO Class 2 exam includes
The actual test paper has 50 questions.
Time allowed : 60 minutes.
There are 3 sections, 20 questions in section I ( 20 in section II and 10 in section III.
Section I : Logical Reasoning,
Section II : Mathem?iirai Reasoning &
Section III : Everyday Mathematics
Numerals and number name, Addition, Multiplication, Division, Fractions, Money, Length (conversions), Weight, Capacity, Time. Point, Line and plane Figures.
kindly post more papers for class II | {"url":"https://educationobserver.com/forum/showthread.php?tid=17155","timestamp":"2024-11-03T14:07:49Z","content_type":"application/xhtml+xml","content_length":"37495","record_id":"<urn:uuid:13d1f077-5bb8-40f9-8064-bcd020d2b800>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00279.warc.gz"} |
List of Calcullators (by family)
List of Calcullators (by type of operation)
Calcullators grid
ASCII-2-number converter + + + +
ASCII/HEX/HTML table + + +
Add numbers (columnar addition) + +
Angle +
Area +
BMI + +
Bandwidth + +
Binary prefixes + +
Chinese zodiac +
Compound interest + + +
Cost of area + + + +
Cost of electricity + + +
Data storage + +
Deposit (investment) + +
Energy + +
Fractions + +
Fractions: 4 operations + +
Fractions: adding and subtracting + +
Fractions: compare + +
Fractions: inverse (reciprocal) + +
GCD + +
HEX/DEC table + +
IP calculator (networks/subnets) + +
ISO 8859-2 (Latin-2) table + + +
Is ID Number Correct + +
LCM + +
Length(size) + +
Loan + +
MTU table + +
Mass(weight) + +
Millionaire calculator + +
Momentum + +
Multiplication table + +
Percentage + +
Polish OFE retirement + + +
Polish ZUS retirement + + +
Polish salary #1 (yearly) + + +
Power + +
Prime number + +
Resistors color codes + +
SI prefixes + +
Speed (velocity) + +
Temperature + +
Tire codes - load index + +
Tire codes - speed ratings + +
VAT(tax) + +
Velocity-distance-time + +
Volume (capacity) + +
Polish "law interest" + +
Polish "tax interest" + +
Polish ZUS payments + + +
Polish car travel in job + + +
Polish court fees + +
Polish earnings #2 (tax↔no tax) + + +
Polish earnings #3 (written order) + + +
Polish earnings during sickness + + +
Polish investments amortization + +
Polish notarial wages + + +
Polish real property cost + + +
Polish vacation days + +
Polish work period + + +
Adding days to date + +
Life time + +
Sleep time + + +
State pension age + + +
The difference between two dates + +
Time to holidays + +
Time to new year + +
Time to school break + +
Time: Date and time formats + + +
Wedding anniversary + + +
Work time + + +
Zodiac sign + +
Full list of Calcullas (alphabetically)
• ASCII-2-number converter
The converter of any ASCII or Unicode text to numbers. Allows for many format-related modifications of output. Numbers can be hexadecimal, decimal or binary. Setting separators (commas,
linebreaks), dividing numbers to groups (ex. put "enter" after each 4 items) - is also "one-click-easy". Supports windows/linux/mac line-break codes.
• ASCII/HEX/HTML table
The whole set of 127 ASCII characters. Table shows decimal(DEC), hexadecimal(HEX), octal(OCT) and binary(BIN) indexes, but also HTML entities (in 3 different formats), ANSI-C entities and ASCII
• Add numbers (columnar addition)
Calculator to add numbers. It displays the sum of any given numbers. It also displays columnar addition of these numbers, the carrying (regrouping), partial sums etc. Can be useful for primary
school students (learning how to do columnar addition), for financial operations (can be set up to display dollar/cent format) and... for any other addition related purpose.
• Angle
Angle units converter. Converts radians, degrees, turns and many more.
• Area
Area units converter. Converts square meters, square foots, acres, hectars and about 50 other units.
• BMI
Calculator for finding out the BMI (Body Mass Index). Just enter your height and weight. The result is the BMI factor itself, but also the interpretation of the factor by WHO description (are you
starving? or are you overweight?). It accepts metric units (centimeters and kilograms) or US-like units (feets, inches and pounds).
• Bandwidth
Bandwidth units converter. Converts KB/s, Mbps etc. All known units of bandwidth in two different bases: 1000 and 1024.
• Binary prefixes
Binary prefixes - kibi, mebi, gibi, tebi etc.
• Chinese zodiac
Calculator for finding chinese zodiac sign. Just give your date of birth - then the calculator will find chinese zodiac sign and zodiac elemental.
• Compound interest
Calculator forecasts future value of your money after applying inflation and/or rate of interest.
• Cost of area
The cost-of-area calculator. It finds the price for a piece of land/property/flat/floor, a cost of painting the wall, a quantity of seed you need to plant your lawn... The expenditure of anything
that depends on area in many different units!
• Cost of electricity
Online calculator of electric energy cost. First you set the price for a 1kWh (one kilo-watt). Then you specify all the electric devices you use, and how much of time they are used daily. The
calculator computes the yearly, montly and daily usage of an electric energy, and its overall cost... This can be used for household computation, but also for any business costs estimation.
• Data storage
Data units converter. Converts bits, bytes, megabytes, gigabytes itd. All known units of data in two different bases: 1000 and 1024. This calculator handles some less known data units: nibbles,
octets, pixels (24/48-bit), words, quads, long-words etc.
• Deposit (investment)
Deposit income calculator. Takes your investment amount, nominal annual interest rate, deposit time and some more settings and calculates your income. Shows period-by-period income capitalization
• Energy
Energy units converter. Converts joules, calories, many physical, british, american and time related units.
• Fractions
Fraction explorer - it displays info related to given fraction. Simply enter a fraction and get equal proper fraction, improper (top-heavy) fraction and simplified fraction. Displays also
numerator and denominator factors.
• Fractions: 4 operations
Calculations on fractions - it performs operations on two given fractions. Simply enter two fractions and get them added, subtracted, multiplied and divided by each other. You will get sum,
difference, product and quotient of these two.
• Fractions: adding and subtracting
Calculator shows how to add or subtract two fractions step by step. Simply enter two fractions, select add or subtract and get list of all partial steps needed to compute result. Train your math
with calculla!
• Fractions: compare
Calculator compares two fractions and tells you if they are equal or different. If given fractions are different, the calculator will let you know which one is greater, which one is smaller and
what is the difference between them.
• Fractions: inverse (reciprocal)
Calculator finds multiplicative inverse of given fraction or number.
• GCD
Greatest Common Divisor (GCD) calculator - solves GCD for given numbers, but also displays prime dividers (in a school-like way). So, you know how the solution is found. It can find the GCD for 3
numbers too !
• HEX/DEC table
Lookup table (4096 entries) for fast manual conversion of decimal to hex, or hex to decimal numbers.
• IP calculator (networks/subnets)
Online utility for IP address calculations including netmask, broadcast and network addresses, wildcard mask, usable ranges, address format conversion etc. Accepts IP-s in hex/dec and also as one
unsigned int number.
• ISO 8859-2 (Latin-2) table
The whole set of ISO 8859-2 (Latin-2) characters. Table shows decimal(DEC), hexadecimal(HEX), octal(OCT) and binary(BIN) indexes, but also HTML entities (in 3 different formats), ANSI-C entities,
unicode (HEX, OCT and DEC) and textual descriptors.
• Is ID Number Correct
The online calculator for checking correctness of ID numbers. It validates: IBAN (bank account number), EAN (article), ISBN (book), ISMN (music), ISSN (serial). Just enter the code (with or
without any dots, spaces, slashes etc.) and in a moment you will know, if it is a valid code and which one of them.
• LCM
Least Common Multiple (LCM) calculator - finds LCM for up to three given numbers and shows process of dividing by primes as for school-like interpretation.
• Length(size)
Length units converter - converts units between kilometers, meters, decimeters, angstroms, miles, feet, inches, american/imperial units, nautical (sea) units and astronomical units
• Loan
Loan calculator - interest, monthly payments, principal part, one time fees. Amortizing and term loan considered.
• MTU table
MTU (Maximum Transmission Unit) values assigned to given network type.
• Mass(weight)
Mass(weight) units converter - converts units between metric(tons, kilograms, etc.), Avoirdupois/US(pounds, ounces) and troy systems
• Millionaire calculator
Calculator simulates raising your first million by systematic saving.
• Momentum
Online calculator for momentum. Computes values of momentum, mass or velocity using the momentum formula.
• Percentage
Calculator finds solutions to common percentage problems. It's done in easy way: more like stories and everyday situations, less like math language.
• Polish OFE retirement
Online calculator computes the pension received from both ZUS (polish retirement institution) and Open Retirement Funds.
• Polish ZUS retirement
This online calculator computes the amount of pension received from Polish retirement institution called ZUS.
• Polish salary #1 (yearly)
The take-home salary calculator for Poland. It takes your gross income and calculates all your take-home (netto) earnings month by month. It also displays all parts of salary: polish national
insurance (ZUS), income taxes and other costs of working as an employee in Poland.
• Power
Power units converter. This calculator converts between horsepower, wats and over a dozen other power units.
• Prime number
Prime numbers and factors online calculator (really fast). Give it an integer number, and you got the answers: Is the number prime? If not, what are prime factors? What are results of dividing by
prime factors ? What is next/previous prime number ? All of those questions answered here. Used computation method makes this prime number calculator one of the fastest in the web.
• Resistors color codes
Calculator decodes parameters of resistor (resistance value, tolerance, temperature coefficient) painted as colored bands on resistor and vice versa.
• SI prefixes
A metric units prefixes (SI prefixes).
• Speed (velocity)
Speed (velocity) units converter - converts units between metric (kilometres per hour, meters per second and many more), british-american (miles per hour, foot per second and many more), nautical
(knots) and some other (machs, speed of light etc.)
• Temperature
Temperature units converter. Easy conversion of Kelvin, Celsius, Fahrenheit, Rankine and other temperature related (heat) units.
• Tire codes - load index
The table of tire load index codes. And also explanation of meaning of tire markings (bit by bit) with simple infographic.
• Tire codes - speed ratings
The table of tire speed rating codes. And also explanation of meaning of tire markings (bit by bit) with simple infographic.
• VAT(tax)
Online VAT tax calculator (VAT is Value Added Tax). Computes net amount, gross amount and tax value depending of given tax rate (handles VAT for many countries and goods types). Really simple tax
calculator !
• Velocity-distance-time
Online calculator for velocity. Computes values of velocity, distance or time using the average velocity formula.
• Volume (capacity)
Converter of volume units (also capacity units). Supports 110+ different units used over the world. Gallons, litres, cubic meters, pints, barrels and 100+ other !
• Polish car travel in job
Online calculator for finding an amount of cash that can be repaid by an employer for using private car.
• Polish court fees
Online calculator computes cost (fees) of starting private case in Polish court.
• Polish earnings #2 (tax↔no tax)
The earnings, taxes and other costs of working as an employee in Poland. The easy conversion between brutto↔netto (tax↔no tax earnings).
• Polish vacation days
Online calculator for calculating number of vacation days for every year of work in Poland.
• Adding days to date
Calculator computes what day is after adding (or subtracking) given number of days to your date.
• Life time
Calculator computes the number of years, months, days, hours, minutes and seconds, which have passed since your date of birth. In other words, it calculates how long you live.
• Sleep time
Calculator computes the number of years, months, days, hours, minutes and seconds, which you slept during your life.
• State pension age
Calculator checks when you can retire.
• The difference between two dates
Calculator computes how many days (hours, minutes, seconds) passed between two dates.
• Time to holidays
Calculator computes how many days (hours, minutes, seconds) remained to coming holidays within next year.
• Time to new year
Calculator computes how many days (hours, minutes, seconds) remained to celebrate new year.
• Time to school break
Calculator computes how many days (hours, minutes, seconds) remained to coming holidays or the end of school year.
• Time: Date and time formats
Calculator converts date and time from one format to another. Supports number of calendars (Julian, Islamic, Persian, Indian) and also computer time (UNIX time).
• Wedding anniversary
Put in your wedding date and Calculla will show you list of your wedding anniversaries.
• Work time
Calculator computes number of work hours in given period.
• Zodiac sign
Calculator calculates what is your zodiac sign depending on your date of birth. | {"url":"http://v1.calculla.com/calculatorsGrid?menuGroup=Math","timestamp":"2024-11-12T21:57:03Z","content_type":"application/xhtml+xml","content_length":"103930","record_id":"<urn:uuid:72a260b8-0458-4555-9ea3-382ee00adb2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00066.warc.gz"} |
Words of Wisdom from Professors…
…on the Importance of Computing for Mathematicians
“Computo, ergo sum—” Jeffrey Lagarias
David Speyer: “Three of my students solved major parts of their thesis problems after I insisted they create code capable of checking the first nontrivial examples of their claims.”
John Stembridge: “I think the most valuable thing that computation can provide for a pure mathematician is the confidence that can come from seeing how special cases play out. If you are confident
that what you are trying to prove is true, that can make all the difference.”
Sarah Koch: “I am constantly generating data and drawing pictures with various computer programs to better understand the spaces I encounter in my research. The programs I use range from Maple and
Mathematica, to dynamical systems software. Many of my new ideas and questions are inspired by fascinating geometric phenomena that I discover in these computer pictures.”
Michael Zieve: “I use Magma every day in doing my research, both for doing experiments to see what might be true and then sometimes for producing computer-aided proofs of various results. To first
approximation, Magma has all known algorithms for algebra, number theory, algebraic geometry, linear algebra and combinatorics already built into it, so that if someone somewhere in the world has
figured out how to compute something, then you too can compute it easily using Magma. For instance, you can easily compute the irreducible characters of the automorphism group of a curve, etc.”
Jeff Lagarias: “If you can compute, you can do mathematical experiments for yourself, gather data, formulate hypotheses, and test them, and make discoveries. Each skill you have widens the possible
opportunities you can seize. The sooner you learn it, the sooner you can use it.”
“My advisor Harold Stark succeeded in part because he could compute. For starters, his PhD thesis. Later he formulated conjectures, which made startling predictions, only confirmed by computer
experiments, which caught people’s attention. Some of these examples, verified computationally to hundreds of decimal places, are still not proved 40 years later.”
Kartik Prasanna: “In certain areas of number theory, computing can be extremely valuable. I work on special values of L-functions and performing computations can provide incredible insights. Many of
the major conjectures in the subject (eg. the Birch and Swinnerton-Dyer conjecture) were suggested by computer calculations. Moreover, I find that trying to implement something on a computer is a
good way to check if you really understand it … since the slightest error in understanding will typically cause your computer program to output garbage. And it can be fun to play to around with
explicit numerical examples! Not just do they solidify your understanding, but they make things concrete and remind you that in the end, deep theorems in number theory are often reflected in rather
simple, concrete statements about numbers. I highly recommend that all my students be familiar and comfortable with SAGE and MAGMA. ”
Victoria Booth: “I use Matlab all the time in my research.”
Wei Ho: “The Birch and Swinnerton-Dyer Conjecture originally arose from computations about elliptic curves!”
Jenny Wilson: “In an ongoing project, one of my co-authors used MAGMA and the lrs Vertex Enumeration/Convex Hull package to perform computations in H^2 (GL_2(O)) for certain rings O.”
Alex Wright: “At least in my case, if I know a graduate student can program, this opens up new avenues,
possibly allowing them to work [with me] on a more original thesis problem.”
“The best example I know of amazing use of computing in math research is in this paper of Kontsevich and Zorich, quote: We started from computer experiments with simple one-dimensional ergodic
dynamical systems called interval exchange transformations. Correlators in these systems decay as a power of time. In the simplest non-trivial case the exponent is equal to 1/3. We found a formula
connecting characteristic exponents with explicit integrals over moduli spaces of algebraic curves with additional structures. Moreover, these integrals can be interpreted as correlators in a
topological string theory. Also a new analogy arose between ergodic theory and complex algebraic geometry.“
Ralf Spatzier: “Without a doubt, computers have revolutionized pure mathematics as they allow us to study complicated examples in ways that escape mere mortals. This allowed us to see phenomena and
formulate conjectures beyond any previous dreams. Acquiring at least rudimentary skills in some advanced programming language will allow mathematicians to engage in such experiments.”
Wei Ho: “In my field, it’s very useful to be able to code. Magma and sage (open source) are the main tools. I have a joint paper that’s 100% computational (creating a giant database and computing
invariants for the entries).
There is a conference every two years called the “Algorithmic Number Theory Symposium (ANTS)” — almost all of the talks / papers accepted are based on computations.
One of the large Simons collaboration grants right now is in “Arithmetic geometry, number theory, and computation.” And they have hired postdocs who code instead of teach!”
Jeff Lagarias: “The computer is a more dependable assistant than any human.”
Harm Derksen: “I have used programming in research and so have many of the graduate students I have worked with. Languages that we have used for math include maple, matlab, magma, gap, python, C++,
Macaulay2. We have used them for computations, testing conjectures and applications.”
Alexander Barvinok: “Numbers do not lie.” | {"url":"https://sites.lsa.umich.edu/math-graduates/best-practices-advice/computing/words-of-wisdom-from-professors/","timestamp":"2024-11-05T17:39:44Z","content_type":"text/html","content_length":"134556","record_id":"<urn:uuid:4fdfcefa-ff31-4c26-8fbd-70be8af95039>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00847.warc.gz"} |
SMALL Function in Excel (Formula, Examples) | How to use SMALL?
Updated August 23, 2023
SMALL Function in Excel (Table of Contents)
SMALL Function in Excel
A small function in Excel is used for getting the smallest number from the selected range of numbers with the help of the Kth position in the range. For example, we have 10 different numbers, and we
need to find the smallest number out of that; by using the Small function, we can get the 1^st or 2^nd or any Kth smallest number out of those 10 selected numbers.
SMALL Formula in Excel:
Below is the SMALL Formula in Excel :
The SMALL function has two arguments, i.e. array, k. Both are required arguments.
• Array: This is the range of cells you are selecting it as the source data to find the K value.
• K: This is the K^th position of the number. From the list, it gives the bottom value.
In this function, the range should not be empty, and we need to specify both arguments.
How to Use SMALL Function in Excel?
This Function in Excel is very simple and easy to use. Let us now see how to use this SMALL Function in Excel with the help of some examples.
Example #1
Below are the scores of the students on a test. From the below-given data, find the smallest and the 3rd smallest scores.
If we find the smallest number, we can simply apply MIN Function. If you look at the below image, both the formulas return the same value as the smallest number in the given list.
However, MIN Function stops there only. It cannot find the 2^nd, 3^rd, and 4^th smallest numbers. In such cases, SMALL can give us the Kth position number.
Find the 3^rd Smallest Number.
We need to specify the number in the Kth argument to find the third smallest score or number from the list.
The formula is =SMALL (B2:B11, 3), and it reads like this:
“In the range, B2:B11 find the 3^rd largest value.”
So the result will be :
Example #2
Below is the data for a cycle race. From this list, you need to find the winner. Data includes names, start time, and end time.
From this list, we need to find who has taken the least time to complete the race.
Step 1: Find the total time taken.
The time taken to complete the race arrived by deducting the start time by the end time. The image below shows the actual time each one takes to complete the race.
Step 2: Now apply the SMALL Function to get the winner.
So the result will be :
It is a bit of a herculean task if the list is long. But we can just name the winner using the if condition.
So the result will be :
Similarly, it is applied to other cells in that column to get the desired output.
If the value arrived by a SMALL function is equal to the actual time taken value, we call it as Winner or Better Luck Next Time.
Example #3
We can use the SMALL Function along with other functions. From the below-given list, find the sum of the bottom 3 values for Week 2.
Apply the below SMALL function along with the SUM & VLOOKUP function.
This is an array formula. You need to close the formula by typing Ctrl + Shift + Enter. This would insert the curly brackets before and after the formula.
VLOOKUP returns the value for WEEK 2 specified by the SMALL function for 3 bottom values. Then SUM function will add the bottom values together and return the result as 1988.
Things to Remember
• SMALL Function ignores text values and considers only numerical values.
Result is :
• A SMALL function returns an error if there are no numerical values in the list.
Result is :
• If there are any duplicates, then SMALL considers the first value as the smaller one.
• K should be numeric; otherwise, it returns the error as #VALUE!
• Supplied range should not be empty.
• If we find only the least value, we can use the MIN Function. But it finds only the first smallest value.
• Even though SMALL ignores text values, if there are any errors, it will return the result as #DIV/0!
• We can use SMALL and many other functions to find the Nth values.
• Use practically to get the hint of the SMALL function.
• If you use SMALL with other functions, it becomes an array formula.
Recommended Articles
This has been a guide to SMALL Function in Excel. Here we discuss the SMALL Formula in Excel and how to use a SMALL Function in Excel, with practical examples and a downloadable Excel template. You
can also go through our other suggested articles – | {"url":"https://www.educba.com/small-function-in-excel/","timestamp":"2024-11-12T05:40:18Z","content_type":"text/html","content_length":"355217","record_id":"<urn:uuid:06ecf5da-f1ba-4965-98d1-b8bf872cd9cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00534.warc.gz"} |
Psychology week 3 additional worksheet
Provide a response to the following prompts.
Note: Each team member should compute the following questions and submit them to the Learning Team forum. The team should then discuss each team member’s answers to ascertain the correct answer for
each question. Once your team has answered all the questions, submit a finalized team worksheet.
1. When a result is not extreme enough to reject the null hypothesis, explain why it is wrong to conclude that your result supports the null hypothesis.
2. List the five steps of hypothesis testing and explain the procedure and logic of each.
3. A researcher wants to know whether people who regularly listen to radio talk shows are more or less likely to vote in national elections than people in general.
a. State the research hypothesis and null hypothesis
b. Would the researchers use a one- or two-tailed Z test?
4. The general population (Population 2) has a mean of 30 and a standard deviation of 5, and the cutoff Z score for significance in a study involving one participant is 1.96. If the raw score
obtained by the participant is 45, what decisions should be made about the null and research hypotheses?
5. One hundred people are included in a study in which they are compared to a known population that has a mean of 73, a standard deviation of 20, and a rectangular distribution.
a. μM = __________.
b. σM = __________.
c. The shape of the comparison distribution is __________.
d. If the sample mean is 75, the lower limit for the 99% confidence interval is __________.
e. If the sample mean is 75, the upper limit for the 99% confidence interval is __________.
f. If the sample mean is 75, the lower limit for the 95% confidence interval is __________.
g. If the sample mean is 75, the upper limit for the 95% confidence interval is __________.
6. A psychology professor of a large class became curious as to whether the students who turned in tests first scored differently from the overall mean on the test. The overall mean score on the test
was 75 with a standard deviation of 10; the scores were approximately normally distributed. The mean score for the first 20 students to turn in tests was 78. Using the .05 significance level, was the
average test score earned by the first 20 students to turn in their tests significantly different from the overall mean?
1. Use the five steps of hypothesis testing.
2. Figure the confidence limits for the 95% confidence interval.
https://doneassignments.com/wp-content/uploads/2021/08/logo-300x75.png 0 0 admin https://doneassignments.com/wp-content/uploads/2021/08/logo-300x75.png admin2021-08-28 01:20:512021-08-28 01:20:51
Psychology week 3 additional worksheet | {"url":"https://doneassignments.com/2021/08/28/psychology-week-3-additional-worksheet/","timestamp":"2024-11-12T12:58:39Z","content_type":"text/html","content_length":"57618","record_id":"<urn:uuid:10e10d6d-c1cf-40ab-8417-a156bd4ddd63>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00408.warc.gz"} |
Ball Pit CalculatorBall Pit Calculator - Calculator Flares
Ball Pit Calculator
Basic Calculator
Advanced Calculator
Setting up a ball pit is a fantastic way to bring fun and excitement to any space, whether it’s for kids at home or a commercial play area. One common question is: how many balls do you need to fill
your ball pit adequately? The Ball Pit Calculator is a simple tool that helps you calculate the number of balls required for your ball pit, ensuring it’s filled just right.
What is a Ball Pit?
A ball pit is a pool or confined area filled with small, colorful plastic balls. These are commonly found in indoor playgrounds, family entertainment centers, and even some homes. They provide a fun,
sensory experience for children and are also used in therapy settings for sensory integration.
Need of Calculating the Right Number of Balls
Calculating the correct number of balls for your ball pit is crucial for several reasons:
• Safety: Overfilling can create a suffocation hazard, while underfilling may not provide the intended cushioning effect.
• Cost Efficiency: Knowing the exact number helps avoid unnecessary purchases.
• Optimal Fun: The right amount ensures maximum enjoyment without the balls spilling over or the pit looking sparse.
The Basic Formula for Calculating Balls Needed
The formula to calculate the number of balls needed for a ball pit is:
• BBB = Number of Balls
• LLL = Length of the ball pit (ft)
• WWW = Width of the ball pit (ft)
• DDD = Depth of the ball pit (ft)
Guide to Using the Ball Pit Calculator
1. Determine the Dimensions: Measure the length, width, and depth of your ball pit in feet.
2. Apply the Formula: Plug these measurements into the formula.
3. Perform the Calculation: Multiply the length, width, and depth, then divide by 8, and multiply by 500.
4. Verify with a Calculator: Use the ball pit calculator for accuracy.
Example Calculation
Let’s go through an example to demonstrate the calculation:
• Length (LLL) = 5 ft
• Width (WWW) = 10 ft
• Depth (DDD) = 5 ft
Using the formula:
So, you would need approximately 15,625 balls for your ball pit.
Factors to Consider When Calculating Ball Pit Balls
1. Ball Size
The formula assumes standard-sized balls (approximately 2.5 inches in diameter). Using larger or smaller balls will affect the number needed.
2. Shape of the Pit
The formula works best for rectangular pits. Irregular shapes might require a more nuanced approach.
3. Desired Depth of Balls
While the depth of the pit is crucial, the depth to which you want the balls filled is also a consideration. Shallower fill levels need fewer balls.
4. Settling and Compression
Over time, the balls will settle and compress, especially if used frequently. It’s a good idea to account for this by adding an extra 10-15% to your initial calculation.
Try : Profit First Calculator
Common Pitfalls and How to Avoid Them
1. Ignoring Ball Size: Always verify the size of the balls being used.
2. Incorrect Measurements: Double-check your measurements for accuracy.
3. Overfilling: Follow the calculated number to avoid overfilling and potential hazards.
4. Forgetting Settling Factor: Remember that balls will compress over time; consider adding extra to maintain the desired level.
FAQs About Ball Pit Calculations
Q: How deep should a ball pit be?
A: The depth can vary, but typically 2-3 feet is sufficient for a fun experience without creating hazards.
Q: Can I use different-sized balls in the same pit?
A: It’s not recommended as it can affect the overall stability and safety of the pit.
Q: How often should I replace the balls in the pit?
A: Depending on usage, inspect balls regularly for damage and cleanliness, replacing as necessary.
Q: What if I have an irregular-shaped pit?
A: For irregular shapes, calculate the volume for each section separately and sum the totals.
Q: Are there safety standards for ball pits?
A: Yes, always follow safety guidelines and standards, especially if the pit is for commercial use. | {"url":"https://calculatorflares.com/ball-pit-calculator/","timestamp":"2024-11-03T07:38:34Z","content_type":"text/html","content_length":"195252","record_id":"<urn:uuid:1944d4b8-199d-4b61-98e4-664606c44a8b>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00746.warc.gz"} |
What is currying in JavaScript
The Curry Dish Analogy
Currying is a mouthful of a term, no pun intended. If you're new to programming or JavaScript, you might be wondering what this culinary term has to do with writing code. To help you understand,
let's use an analogy. Imagine you're in a restaurant, and you order a curry dish. But instead of serving it all at once, the waiter brings you the ingredients one at a time. First, you get the sauce,
then the meat, and finally, the vegetables. This is somewhat similar to what happens when we curry a function in JavaScript.
Defining Currying
In simple terms, currying is a process in functional programming where a function with multiple arguments is transformed into a sequence of functions, each with a single argument. So, if you have a
function that takes three arguments, currying would break it down into three functions, each taking one argument.
Currying in Action
Let's look at an example. Here's a simple function that adds two numbers together:
function add(a, b) {
return a + b;
If we call add(1, 2), we get 3 as the result. Simple, right?
Now, let's transform this into a curried function.
function curriedAdd(a) {
return function(b) {
return a + b;
With this curried version, we would call it like this: curriedAdd(1)(2), and we would still get 3.
The Why of Currying
You might be wondering why we would want to do this. It seems a bit more complicated, right? In programming, we often solve complex problems by breaking them down into smaller, simpler problems.
Currying allows us to do this by creating smaller, more specific functions from our general ones.
Another Example
Let's look at another example. Suppose we have a function that calculates the total price for a number of items at a specific price:
function totalPrice(price, quantity) {
return price * quantity;
We could call this function with two arguments, like totalPrice(10, 2), and it would give us 20.
Now, let's curry this function:
function curriedTotalPrice(price) {
return function(quantity) {
return price * quantity;
With this curried version, we would call it like this: curriedTotalPrice(10)(2), and we would still get 20. But now, we can also create a new function that's specifically for items that cost $10:
var tenDollarItems = curriedTotalPrice(10);
Now, we can easily calculate the total price for any quantity of $10 items, like tenDollarItems(2), which gives us 20.
Conclusion: The Spice of JavaScript
Currying, like the spice it's named after, adds a unique flavor to JavaScript programming. It might seem a little strange at first, but once you get the taste for it, you'll find it adds a depth and
complexity to your code that's hard to achieve in other ways. Currying allows you to break down complex problems into simpler, more manageable pieces, and can make your code more readable and easier
to debug. So next time you're in the coding kitchen, don't be afraid to add a little curry to your JavaScript dish. Who knows, you might find it's just the ingredient you've been missing. | {"url":"https://www.altcademy.com/blog/what-is-currying-in-javascript/","timestamp":"2024-11-02T17:24:09Z","content_type":"text/html","content_length":"33324","record_id":"<urn:uuid:91e4274f-55ad-46dc-b8c1-02963a27c078>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00280.warc.gz"} |
Joint II
Joint II --- Introduction ---
This module is an exercise on the continuity and differentiability of real functions.
The server generates a function f of one real variable x, containing parameters a[1], a[2],..., which is defined by different formulas on three segments of an interval. Your aim is to find values for
these parameters a[i] such that f is continuous (or continuous and differentiable, following the level of difficulty you have chosen).
One can solve the problem either by Taylor expansions, or by successive derivations. In the difficult cases, the resolution of a system of linear equations is also necessary.
Choose the level of difficulty you want: 1 . 2 . 3 . 4 . 5 . 6 . The most recent version
This page is not in its usual appearance because WIMS is unable to recognize your web browser.
Please take note that WIMS pages are interactively generated; they are not ordinary HTML files. They must be used interactively ONLINE. It is useless for you to gather them through a robot program.
Description: parametrize a function to make it continue or differentiable on 2 points. interactive exercises, online calculators and plotters, mathematical recreation and games
Keywords: interactive mathematics, interactive math, server side interactivity, analysis, algebra, continuity, derivative, limit | {"url":"http://www.designmaths.net/wims/wims.cgi?lang=en&+module=U1%2Fanalysis%2Fjoint2.en","timestamp":"2024-11-13T06:43:40Z","content_type":"text/html","content_length":"4431","record_id":"<urn:uuid:4fc869e3-de1a-463a-98ea-c4495f4281fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00635.warc.gz"} |
Incredible Cryptic Quiz Answer Key E-9 Ideas › Athens Mutual Student Corner
Incredible Cryptic Quiz Answer Key E-9 Ideas
Incredible Cryptic Quiz Answer Key E-9 Ideas. You may wish to have students show on a separate paper how they substitute the given value for each variable and determine if it is. Since i don't have a
middle school math pizzazz book in front of me, this will be difficult to answer.
April 2016 Ms. Berry's Math 6 Class from www.cobblearning.net
You may wish to have students show on a separate paper how they substitute the given value for each variable and determine if it is. Since i don't have a middle school math pizzazz book in front of
me, this will be difficult to answer. The cryptic crossword walter g.
Web Algebra With Pizzazz Page 9 Answer Key.
The cryptic crossword walter g. Since i don't have a middle school math pizzazz book in front of me, this will be difficult to answer. You may wish to have students show on a separate paper how they
substitute the given value for each variable and determine if it is.
Write Comment
activity algebra answer answers biology cell chapter chemical chemistry cycle energy free genetics geometry gizmo grade homework icivics ionic lesson math periodic phet photosynthesis pogil practice
problems puzzle questions quiz quizlet regents review search sheet student system table test triangles unit webquest with word worksheet | {"url":"http://athensmutualaid.net/cryptic-quiz-answer-key-e-9-2/","timestamp":"2024-11-03T21:47:34Z","content_type":"text/html","content_length":"126636","record_id":"<urn:uuid:e9384b1e-0100-4716-a6c6-4ac9d560c34f>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00041.warc.gz"} |
Matt thinks that he
has a special relationship with the number 4. In particular, Matt
Matt thinks that he has a special relationship with the number 4. In particular, Matt thinks...
Matt thinks that he has a special relationship with the number 4. In particular, Matt thinks that he would roll a 4 with a fair 6-sided die more often than you'd expect by chance alone. Suppose pp is
the true proportion of the time Matt will roll a 4.
(a) State the null and alternative hypotheses for testing Matt's claim. (Type the symbol "p" for the population proportion, whichever symbols you need of "<", ">", "=", "not =" and express any values
as a fraction e.g. p = 1/3)
(b) Now suppose Matt makes n = 30 rolls, and a 4 comes up 6 times out of the 30 rolls. Determine the P-value of the test:
P-value = | {"url":"https://justaaa.com/statistics-and-probability/62339-matt-thinks-that-he-has-a-special-relationship","timestamp":"2024-11-12T20:20:58Z","content_type":"text/html","content_length":"37566","record_id":"<urn:uuid:49e1da4d-4380-4234-86eb-458b95193662>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00627.warc.gz"} |
Path in a Number Pyramid - Online Shortest/Longest Tool
Search for a tool
Path Search in a Pyramid Triangle
Tool to search path in a number pyramid. Path search in a pyramid triangle allows to find the shortest path or the longest path by traversing the graph (tree) from the root to its leaves or from the
bottom to the top.
Path Search in a Pyramid Triangle - dCode
Tag(s) : Graph Theory
dCode and more
dCode is free and its tools are a valuable help in games, maths, geocaching, puzzles and problems to solve every day!
A suggestion ? a feedback ? a bug ? an idea ? Write to dCode!
Path Search in a Pyramid Triangle
Path search in a Pyramid of Numbers
Answers to Questions (FAQ)
How to find the shortest path?
Go through the pyramid (or the triangle) from top to bottom, adding values that gave a the smallest total respecting a single rule : only go to one of the two numbers immediately below.
Example: ...5...
2nd line: 5+4=9 or 5+8=13, choose the lowest, the path 5->4.
3rd line: 4+9=13 or 4+5=9, choose the lowest, the path 4->5.
4th line: 5+2=7 or 5+7=12, choose the lowest, the path 5->2.
Finally, the shortest path route is (from top to bottom) 5->4->5->2 (which is 16 long) or 2->5->4->5 (from bottom to top)
How to find the longest path?
Go through the pyramid from top to bottom, as for the shortest path, but by adding values that gave the highest total.
How to count possible paths?
Total number of path $ N $ in a pyramid of height $ H $ is given by the formula : $$ N = 2^{H-1} $$
Source code
dCode retains ownership of the "Path Search in a Pyramid Triangle" source code. Except explicit open source licence (indicated Creative Commons / free), the "Path Search in a Pyramid Triangle"
algorithm, the applet or snippet (converter, solver, encryption / decryption, encoding / decoding, ciphering / deciphering, breaker, translator), or the "Path Search in a Pyramid Triangle" functions
(calculate, convert, solve, decrypt / encrypt, decipher / cipher, decode / encode, translate) written in any informatic language (Python, Java, PHP, C#, Javascript, Matlab, etc.) and all data
download, script, or API access for "Path Search in a Pyramid Triangle" are not public, same for offline use on PC, mobile, tablet, iPhone or Android app!
Reminder : dCode is free to use.
Cite dCode
The copy-paste of the page "Path Search in a Pyramid Triangle" or any of its results, is allowed (even for commercial purposes) as long as you credit dCode!
Exporting results as a .csv or .txt file is free by clicking on the export icon
Cite as source (bibliography):
Path Search in a Pyramid Triangle on dCode.fr [online website], retrieved on 2024-11-05, https://www.dcode.fr/path-search-pyramid-triangle
Similar pages
© 2024 dCode — El 'kit de herramientas' definitivo para resolver todos los juegos/acertijos/geocaching/CTF. | {"url":"https://www.dcode.fr/path-search-pyramid-triangle","timestamp":"2024-11-05T06:56:34Z","content_type":"text/html","content_length":"20189","record_id":"<urn:uuid:331d602b-954c-45de-bbda-33e2fd8817a8>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00257.warc.gz"} |
Adding Fractions To Mixed Numbers Worksheet 2024 - NumbersWorksheets.com
Adding Fractions To Mixed Numbers Worksheet
Adding Fractions To Mixed Numbers Worksheet – Innovative add-on drills are fantastic ways to expose college students to algebra methods. These drills have choices for one-second, three-minute, and
several-second drills with custom varieties of 20 to hundred or so problems. Furthermore, the drills may be found in a horizontal file format, with amounts from to 99. These drills can be
personalized to each student’s ability level. That is the best part. Below are a few extra sophisticated supplement drills:
Matter forwards by one particular
Depending on is actually a beneficial technique for creating number truth fluency. Count on a number by addingtwo and one, or three. For example, 5 plus two is equal to 10, and the like. Depending on
a quantity with the addition of one will make the very same end result for small and large figures. These inclusion worksheets involve process on counting on a quantity with equally hands as well as
the quantity line. Adding Fractions To Mixed Numbers Worksheet.
Training multiple-digit supplement by using a variety collection
Wide open number lines are wonderful versions for place and addition value. Within a earlier article we talked about the various intellectual tactics college students may use to provide phone
numbers. Employing a variety line is the best way to record every one of these strategies. In this article we will investigate one method to process multiple-digit addition having a number series.
Allow me to share three strategies:
Training adding doubles
The exercise incorporating doubles with addition phone numbers worksheet could be used to aid kids produce the idea of a doubles truth. A doubles fact is when the same number is added more than once.
If Elsa had four headbands and Gretta had five, they both have two doubles, for example. By practicing doubles with this worksheet, students can develop a stronger understanding of doubles and gain
the fluency required to add single digit numbers.
Practice introducing fractions
A Exercise including fractions with supplement amounts worksheet is really a valuable device to build up your child’s simple familiarity with fractions. These worksheets include several principles
related to fractions, such as evaluating and purchasing fractions. In addition they offer you valuable difficulty-solving tactics. You are able to obtain these worksheets at no cost in PDF format.
The first step is to be certain your child understands the symbols and rules related to fractions.
Exercise including fractions using a number collection
With regards to rehearsing incorporating fractions having a quantity collection, individuals are able to use a small fraction spot benefit pad or perhaps a number range for merged amounts. These help
in complementing portion equations for the solutions. The location benefit mats will have a amount of examples, with the equation composed on the top. Pupils can then choose the response they desire
by punching pockets next to each and every selection. As soon as they have selected the proper solution, the student can bring a cue near the solution.
Gallery of Adding Fractions To Mixed Numbers Worksheet
Adding Mixed Fractions With Different Denominators Worksheets
Adding Fractions Mixed Numbers Worksheet
Adding Mixed Numbers Worksheet Adding Mixed Number Fractions
Leave a Comment | {"url":"https://numbersworksheet.com/adding-fractions-to-mixed-numbers-worksheet/","timestamp":"2024-11-09T19:23:27Z","content_type":"text/html","content_length":"54579","record_id":"<urn:uuid:975fbbaa-344e-456e-b52b-7015691c1885>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00060.warc.gz"} |
Cdf of a discrete random variable and convergence of distributions
Cdf of a discrete random variable and convergence of distributions
• Thread starter Artusartos
• Start date
In summary, the conversation discusses the confusion surrounding the limits of F_{X_n}(x) and F_X(x) at continuity points x of F_x. It is noted that while the graph of F_X(x) is a straight line y=0,
with only x=0 at y=1, the points to the right of zero should not be equal to the limit of F_{X_n}(x) because F_X(x) is always zero at those points, but F_X(x) is 1. The conversation then delves into
the limits at specific points and concludes that the limits under consideration involve n \rightarrow \infty.
In the page that I attached, it says "...while at the continuity points x of [itex]F_x[/itex] (i.e., [itex]x \not= 0[/itex]), [itex]lim F_{X_n}(x) = F_X(x)[/itex]." But we know that the graph of
[itex]F_X(x)[/itex] is a straight line y=0, with only x=0 at y=1, right? But then all the points to the right of zero should not be equal to the limit of [itex]F_{X_n}(x)[/itex], right? Because
[itex]F_X(x)[/itex] is always zero at those points, but [itex]F_X(x)[/itex] is 1? So how do I make sense of that?
Thanks in advance
Last edited:
Artusartos said:
But we know that the graph of [itex]F_X(x)[/itex] is a straight line y=0, with only x=0 at y=1, right?
No, I think [itex] F_X(x) [/itex] is the cumulative distribution, not a density function.
Stephen Tashi said:
No, I think [itex] F_X(x) [/itex] is the cumulative distribution, not a density function.
Oh, ok...
But it's still confusing. What if n=4 (for example)? Then [tex]F_{X_n} = 1[/tex] if [tex]x \geq 1/4[/tex], and [tex]F_{X_n}=0[/tex], when [tex]x < 1/4[/tex], right? So for any x between 0 and 1/4,
the limit at those points is 0, but the limit of [tex]F_X[/tex] at those points is 1...so the limits are not equal, are they?
Artusartos said:
So for any x between 0 and 1/4, the limit at those points is 0,
What limit are you talking about? Something like [itex] lim_{x \rightarrow 1/8} F_{X_4}(x) [/itex] ? I see nothing in the discussion in the book that dealt with that sort of limit. The limits under
consideration involve [itex] n \rightarrow \infty [/itex].
for your question and for attaching the page for reference. The statement you referenced is discussing the convergence of distributions, which is a concept in probability theory that deals with the
behavior of a sequence of random variables as the number of variables in the sequence increases. This is different from the cumulative distribution function (CDF), which is a function that maps the
probability of a random variable being less than or equal to a certain value.
In the statement, the limit refers to the behavior of the CDF of the sequence of random variables (F_{X_n}(x)) as the number of variables (n) increases. The CDF of a discrete random variable is a
step function, where the jumps occur at the values of the random variable. So, at the continuity points (x \not= 0), the limit of F_{X_n}(x) as n increases equals the value of F_X(x). This means that
as the number of variables in the sequence increases, the CDF of the sequence approaches the CDF of the original random variable at those points.
You are correct in saying that the graph of F_X(x) is a straight line with a value of 1 at x=0 and a value of 0 at all other points. However, this is for the CDF of a specific random variable, not a
sequence of random variables. The statement is discussing the behavior of the CDF of a sequence of random variables, which may have different CDFs at different points.
In summary, the statement is discussing the convergence of distributions, which is a concept in probability theory that deals with the behavior of a sequence of random variables. The limit mentioned
is referring to the behavior of the CDF of the sequence as the number of variables in the sequence increases, and it does not necessarily equal the CDF of a specific random variable at all points. I
hope this helps clarify the concept for you.
FAQ: Cdf of a discrete random variable and convergence of distributions
What is the CDF of a discrete random variable?
The Cumulative Distribution Function (CDF) of a discrete random variable is a function that maps the probability of the variable taking on a certain value or a value less than or equal to that value.
How is the CDF of a discrete random variable calculated?
The CDF of a discrete random variable is calculated by summing the probabilities of all the outcomes less than or equal to the value of interest.
What is the significance of the CDF in probability and statistics?
The CDF is important in probability and statistics as it allows us to determine the probability of a random variable taking on a certain value or a value less than or equal to that value. It also
helps us to analyze and compare different distributions and their properties.
What is convergence of distributions?
Convergence of distributions refers to the behavior of a sequence of random variables as the number of observations increases. It is the process of determining whether the observed data is
approaching a specific distribution as the sample size increases.
What is the relationship between the CDF of a discrete random variable and convergence of distributions?
The CDF of a discrete random variable is a key component in determining the convergence of distributions. As the sample size increases, the CDF of the observed data should converge to the CDF of the
underlying distribution, indicating that the observed data is following the same distribution as the underlying population. | {"url":"https://www.physicsforums.com/threads/cdf-of-a-discrete-random-variable-and-convergence-of-distributions.667097/","timestamp":"2024-11-09T14:02:49Z","content_type":"text/html","content_length":"97229","record_id":"<urn:uuid:a6678e74-1ab5-4ff4-a13e-f77776567b26>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00266.warc.gz"} |
The FreeMABSys project and the Blad librairies (Monday, 16h10–17h00). The FreeMabSys project aims at developing the FreeMabSys library. This open source systems biology library evolves from the Maple
MabSys package. Models are implemented as chemical reaction systems. They are analyzed by means of symbolic and numeric methods. The FreeMabSys project is supported by the ANR LEDA. In this talk, I
will present our motivations for the project and the functionalities of the open source Blad libraries that are directly related to FreeMabSys. These libraries are currently incorporated to the Maple
computer algebra software. I will try to give some feedback about this experience and to highlight the good and the not-so-good features of Blad.
Parallel programming with Sklml (Thursday, 10h00–10h50). Writing parallel programs is not easy, and debugging them is usually a nightmare. To cope with these difficulties, the skeleton programming
approach uses a set of predefined patterns for parallel computations. The skeletons are higher order functional templates that describe the program underlying parallelism. Sklml is a new framework
for parallel programming that embeds an innovative compositional skeleton algebra into the OCaml language. Thanks to its skeleton algebra, Sklml provides two evaluation regimes to programs: a regular
sequential evaluation (merely used for prototyping and debugging) and a parallel evaluation obtained via a recompilation of the same source program in parallel mode. Sklml was specifically designed
to prove that the sequential and parallel evaluation regimes coincide.
An OpenAxiom Perspective on Pathways towards Dependable Computational Mathematics (Thursday, 17h10–18h00). With multicore processors becoming commodity, how can computational mathematics spend
Moore's dividend? I will present some perspectives based on the development of the OpenAxiom system. I will briefly discuss OpenAxiom's architecture and challenges getting into the concurrency era. I
will present recent results and projects towards dependable computational mathematics.
High Performance Implementation for Change of Ordering of Zero-dimensional Gröbner Bases (Friday, 16h10–17h00). It is well-known that obtaining efficient algorithms for change of ordering of Gröbner
bases of Issac 2011) that takes advantage of the sparsity structure of multiplication matrices appearing during the change of ordering. This sparsity structure arises even when the input polynomial
system is dense. In practice, we obtain an implementation which is able to manipulate 0-dimensional ideals over a prime field of degree greater than
In this talk we report a high performance implementation of this algorithm. Basic elementary steps were recoded to take advantage of new features of recent Intel processors (SSE4 operations).
Moreover, the linear algebra library was upgraded to a multi-threaded implementation. It out- performs by several order of magnitude the Magma/Singular/FGb implementations of FGLM. For instance,
considering the problem of computing a Lex Gröbner basis from an already computed DRL Gröbner basis of a system of 3 random polynomials in 3 variables of degree 19 over
Mickael Gastineau and Jacques Laskar (IMCCE)
Trip (Monday, 14h50–15h40). Trip is a computer algebra system that is devoted to perturbation series computations, and specially adapted to celestial mechanics. Started in 1988 by J. Laskar, as an
upgrade of the special purpose Fortran routines elaborated for the demonstration of the chaotic behavior of the Solar System, Trip is now a mature tool for handling multivariate generalized power
series. Trip contains also a numerical kernel and could be linked to other computer algebra systems through the Scscp protocol. Trip takes advantage of multiples cores available on modern computers
using the work-stealing techniques and efficient memory management. We will present the design of Trip and the techniques used for the parallel computations.
Scilab: What's new? (Tuesday, 16h10–17h00). Scilab is the free numerical computation software for engineering and scientific applications. Originally based on research at INRIA (the French Research
Institute for Computer Science and Automatic Control), Scilab software has become the free reference in numerical computation. Supported by a Consortium made of industrials and academics and by a
strong community of developers, Scilab is used worldwide.
Scilab software is currently used in educational and industrial environments around the world. There are more than 50,000 monthly remote loadings from the Web site of the Consortium. About 80
countries download Scilab. Scilab includes hundreds of mathematical functions with complete on-line help. It has an interpreter and a high level programming language. Scilab language is made for
easily performing linear algebra and matrix computations. It allows making 2-D, 3-D graphics, and animation. Scilab works under Windows XP/Vista/Seven, GNU/Linux, and MacOSX. In the presentation we
will first explain briefly how we manage the process of free software development from Research to business with the recent creation of Scilab Enterprises Company. This process is completely based
upon the strategic orientation of the roadmap of Scilab, mainly towards High Performance Computing and Embedded Systems. So we will present what is new in Scilab today and which are the future
developments, stressing on future major improvement: Scilab 6.
Macaulay2 (Thursday, 14h–14h50). I will discuss aspects of the design of the Macaulay2 user language that might be useful in the Magix project and assess their advantages and disadvantages based on
the experience of our users.
A TeXmacs tutorial (Wednesday, 11h10–12h00). TeXmacs is a free system for editing technical documents available on Windows, MacOS-X and major Unix flavors. It provides a structured view of the
document and allow easy management of various types of contents (text, graphics, mathematics, interactive sessions, etc.) and easy customization via a macro system. In this talk we will demonstrate
the main features of this system, its document format, its customization facilities and its interfacing capabilities to external programs.
Fast Library for Number Theory: Flint (Tuesday, 9h30–10h20). Flint is a C library which offers fast primitives for integer, polynomial and matrix arithmetic. We will discuss some of the recent
technological improvements in Flint and plans for the future. We will give an overview of the various modules available in Flint and discuss our approach to development, comparing the performance to
various other libraries.
Mathemagix Compiler (Thursday, 10h50–11h40). General purpose systems for computer algebra such as Maple and Mathematica incorporate a wide variety of algorithms in different areas and come with
simple interpreted languages for users who want to implement their own algorithms. However, interpreted languages are insufficient in order to achieve high performance, especially in the case of
numeric or symbolic-numeric algorithms. Also, the languages of the Maple and Mathematics systems are only weakly typed, which makes it difficult to implement mathematically complex algorithms in a
robust way.
Since one major objective of the Mathemagix system is to implement reliable numeric algorithms, a high level compiled language is a prerequisite. The design of the Mathemagix compiler has been
inspired by the high level programming style from the Axiom and Aldor systems, as well as the encapsulation of low level features by languages such as C++. Moreover, the compiler has been developed
in such a way that it is easy to take advantage of existing C++ template libraries. In our presentation, we will give an overview of what has been implemented so far and some of the future plans.
Semantic editing with GNU TeXmacs (Friday, 10h50–11h40). Currently, there exists a big gap between formal computer-understandable mathematics and informal mathematics, as written by humans. When
looking more closely, there are two important subproblems: making documents written by humans at least syntactically understandable for computers, and the formal verification of the actual
mathematics in the documents. In our talk, we will focus on the first problem, by discussing some of the new semantic editing features which have been integrated in the GNU TeXmacs mathematical text
To go short, the basic idea is that the user enters formulas in a visually oriented (whence user friendly) manner. In the background, we continuously run a packrat parser, which attempts to convert
(potentially incomplete) formulas into content markup. As long as all formulas remain sufficiently correct, the editor can then both operate on a visual or semantic level, independently of the
low-level representation being used. A related topic, which will also be discussed, is the automatic correction of syntax errors in existing mathematical documents. In particular, the syntax
corrector that we have implemented enables us to upgrade existing documents and test our parsing grammar on various books and papers from different sources.
Mathemagix libraries (Tuesday, 10h50–11h40). We will present the implementations of the elementary operations with polynomials and matrices available in the C++ libraries of Mathemagix. This includes
most of the classical methods for univariate polynomials and series, but also very recent techniques with several variables. Dedicated variants have been designed for numerical types. We will
illustrate some of the possibilities offered for certified numeric computations with balls and intervals. Most of the algorithms can benefit from parallelization and vectorization features now widely
spread in recent platforms.
This work is in collaboration with J. van der Hoeven.
The FreeMABSys project and the MabSys library (Monday, 17h00–17h50). The FreeMABSys project aims at developing the FreeMabSys library. This open source systems biology library evolves from the Maple
MabSys package. Models are implemented as chemical reaction systems. They are analyzed by means of symbolic and numeric methods. The FreeMabSys project is supported by the ANR LEDA. In this talk, I
will present the MabSys package which is written in Maple. I will show how the package can help modelling in Biology (and also for studying dynamical systems with ODE) using approximate and exact
reductions. A software demo will be given.
Taylor models and their applications (Monday, 14h00–14h50). The method of Taylor models provides rigorous enclosures of functions over given domains, where the bulk of the dependency is represented
by a high order multivariate Taylor polynomial, and the remaining error is enclosed by a remainder bound. In this talk, we will discuss how to construct Taylor model arithmetic on computers, which
naturally includes integrations as a part of arithmetic. Computations using Taylor models provide rigorous results, and advantageous features of the method have shown to be able to solve various
practical problems that were unsolvable earlier. The applications start from mere range bounding of functions, leading to sophisticated rigorous global optimization, and especially the fruitful is
the use for rigorous solvers of differential equations.
Geometric computation behind Axel modeler (Friday, 9h30–10h00). We discuss two specific geometric algorithms in Axel modeler, and their connection to Mathemagix. First, a method for approximating
planar semi-algebraic sets, notably arrangements of curves. The method identifies connected components and returns piece-wise linear approximations of them. Second, a framework to compute generalized
Voronoi diagrams, that is applicable to diagrams where the distance from a site is a polynomial function (Apollonius, anisotropic, power diagram etc). Computational needs behind these methods are are
mainly served by two Mathemagix packages: "realroot", for real solving, and "shape", for the geometric algorithms.
Optimizing computer algebra software for data locality and parallelism (Tuesday, 14h00–14h50). Parallel hardware architectures (multicores, graphics processing units, etc.) and computer memory
hierarchies (from processor registers to hard disks via successive cache memories) impose an evolution of in the design of scientific software. Algorithms for scientific computation have often been
designed with algebraic complexity as the main complexity measure and with serial running time as the main performance counter. On modern computers minimizing the number of arithmetic operations is
no longer the primary factor affecting performance. Effective utilization of the underlying architecture (reducing pipeline stalls, increasing the amount of instruction level parallelism, better
utilizing the memory hierarchy) can be much more important.
This talk is an introduction to performance issues related to data locality and parallelism for matrix and polynomial arithmetic operations targeting multicore and GPU implementation. A first part
will be dedicated to data locality independently of parallelism thus using serial algorithms and C programs as illustration. In a second part analyzing and optimizing multithreaded programs on
multicores will be discussed. In the third part, we will switch to GPU specific issues then compare the implementation techniques of both architecture platforms.
Geometric Modeling and Computing with Axel (Wednesday, 10h00–10h50). Axel is an algebraic-geometric modeler which allows to visualize and compute with different types of tridimensional models of
shapes. The main representations include point sets, meshes, rational, bspline, algebraic and semi-algebraic curves and surfaces. It is also a platform which integrates geometric algorithms, through
an open mechanisms of plugins. We will describe its design, its extension mechanism, its main plugins, some of the algorithms involved in the computation with semi-algebraic sets or bsplines, and its
connection with Mathemagix Project.
Exact computations with an arithmetic known to be approximate (Friday, 14h00–14h50). Floating-point (FP) arithmetic was designed as a mere approximation to real arithmetic. And yet, since the
behaviour of each operation is fully specified by the IEEE-754 standard for floating-point arithmetic, FP arithmetic can also be viewed as a mathematical structure on which it is possible to design
algorithms and proofs. We give some examples that show the interest of that point of view.
Marc Pouzet (Univ. Pierre et Marie Curie, ENS)
Some recent developments and extensions of Synchronous Languages (Thursday, 12h00–12h50). Synchronous languages were invented for programming embedded control software. They have been used for the
development of the most critical parts of software in various domains, notably, avionics, power generation, railways, circuits, etc.
The success of the original synchronous languages has spurred the development of new languages, based on the same principles, that increase modularity, expressiveness and that address new
applications like real-time video streaming, latency-insensitive designs, large scale simulations, hybrid (continuous/discrete) systems.
Lucid Synchrone is one of these new languages. It has served as a basis for experiment with several novel extensions including, higher-order functions, type systems for the clock calculus,
hierarchical state machines with shared variables, signals and new compilation methods. Many of these ideas have been adopted in commercial tools, notably Scade 6. New topics include techniques for
allowing bounded desynchronisation through buffers (
Exact computations and topology of plane curves (Tuesday, 11h40–12h30). We will describe a new algorithm for the computation of the topology of plane curves.
Such problems are usually solved by combining two kinds of algorithms: those using exact computations (resultants, Sturm sequences, etc.) and those using semi-numerical or numerical computations
(approximations of roots, refinements of +isolation boxes).
The global efficiency of the related solver is a balance between symbolic computations (slow but giving a lot of certified informations) and numerical computations (fast but eventually not accurate
or not certified) and is also a balance between the use of asymptotically fast basic operations and more classical ones.
In this lecture, we will present a new global algorithm and fully certified algorithm, including a new bivariate solver, and we will explain our algorithmic choices (exact vs semi-numerical) as well
as some of our technical choices for the implementation (basic algorithms, external libraries, multi-thread variants).
RAGlib: The Real Algebraic Geometry Library (Tuesday 14h50–15h40). Most of algorithmic fundamental questions in real algebraic geometry can be solved by the so-called Cylindrical Algebraic
Decomposition algorithm whose complexity is doubly exponential in the number of variables. During the last 20 years, tremendeous efforts have led to algorithms based on the so-called critical point
method yielding much better complexity results. RAGlib is an attempt to put into practice these more recent techniques by implementing algorithms sharing some of the geometric ideas who have led to
theoretical complexity breakthroughs.
After a historical perspective, algorithms will be reviewed and some applications that have been solved using RAGlib will be presented. Efficiency questions will also be discussed and new
functionalities which will appear in the next release will be presented.
Anatomy of Singular (Thursday, 14h50–15h40). I will present Singular and the parts it is composed of: memory management, factorization, Groebner bases/algorithm of Mora, interpreter, etc. The design
decisions taken by the Singular group, their problems and their advantages will be discussed.
We are just restructuring the code which converts these parts into seperate libraries which may be used individually or together. Although this is "work in progress" this set of libraries may already
be useful.
Accelerating lattice reduction with floating-point arithmetic (Tuesday, 17h00–17h50). Computations on Euclidean lattices arise in a broad range of fields of computer science and mathematics. A very
common task consists in transforming an arbitrary basis of given lattice into a “nicer-looking” one, made of somewhat orthogonal vectors. This reduction process, whose most famous instance is LLL,
heavily relies on Gram-Schmidt Orthogonalisations. One can significantly accelerate lattice reduction by replacing the rational arithmetic traditionally used for the underlying GSO computations by
approximate floating-point arithmetic. In this talk, I will elaborate on this mixed numeric-algebraic approach, which has been implemented in the fpLLL library.
The cornerstone of lattice algorithmics is the famous LLL reduction algorithm, which, despite being polynomial-time, remains somewhat slow. It can be significantly speeded up by replacing the exact
rational arithmetic used for the underlying Gram-Schmidt computations, by approximate floating-point arithmetic.
A natural means of speeding up an algorithm consists in working on an approximation of the input with smaller bit-size and showing that the work performed on the approximation is relevant for the
actual exact input. Unfortunately, a lattice basis that is close to an LLL-reduced basis may not be itself LLL-reduced.
Cado-nfs: An implementation of the Number Field Sieve (Friday, 17h00–17h50). The Number Field Sieve is the leading algorithm for factoring large integers, and has been used over the past 20 years to
establish many records in this area. The latest of these records is the factorization or rsa768 in 2010 by a joint effort of several teams worldwide. We discuss in this talk several algorithmic
aspects of NFS, which are necessary features of a state-of-the art implementation like Cado-nfs. These different optimizations have all contributed greatly to the success of the rsa768 factorization.
Éric Walter (Supelec, Univ. Paris-Sud)
Interval Analysis for Guaranteed Set Estimation (Monday, 11h40–12h30). Interval analysis (IA) makes it possible to make proven statements about sets, based on approximate, floating-point
computations. It may thus be possible to prove that the set of all solutions of a given problem is empty, or a singleton, or to bracket it between computable inner and outer approximations. The first
part of this talk will present examples of problems in which one is interested in sets rather than in point solutions. These examples are taken from robotics, robust control and compartmental
modeling (widely used in biology). The basics of IA will be recalled in a second part. Algorithms will be briefly presented that can be used to find all solutions of sets of nonlinear equations or
inequalities, or all optimal estimates of unknown parameters of models based on experimental data. These models may be defined by uncertain nonlinear ordinary differential equations for which no
closed form solution is available. The third and last part of the talk will be a return on the introductory examples, to see what can be achieved and what cannot, what are the advantages and
limitations of IA compared to alternative technics and what are the challenges that IA must face to become part of the standard engineering tools.
Stephen Watt (Univ. of Western Ontario)
What Can We Learn from Aldor? (Thursday, 16h20–17h10). It has now been two decades since the work on Aldor began. From its beginnings at IBM Research, through its further development by the Numerical
Algorithms Group and its use in research projects in Europe and North America, Aldor has explored the design space for programming languages in mathematical computing. Many of the ideas seen in Aldor
have subsequently been adopted by more main-stream programming languages, while others have not. In this talk we give a brief overview of Aldor and what we see as the most significant ideas it
introduced. We go on to reflect on what worked well, what didn't, and why.
The Mathematics of Mathematical Handwriting Recognition (Friday, 11h00–11h40). Accurate computer recognition of handwritten mathematics offers to provide a natural interface for mathematical
computing, document creation and collaboration. Mathematical handwriting, however, provides a number of challenges beyond what is required for the recognition of handwritten natural languages. On one
hand, it is usual to use symbols from a range of different alphabets and there are many similar-looking symbols. Mathematical notation is two-dimensional and size and placement information is
important. Additionally, there is no fixed vocabulary of mathematical “words” that can be used to disambiguate symbol sequences. On the other hand there are some simplifications. For example, symbols
do tend to be well-segmented. With these characteristics, new methods of character recognition are important for accurate handwritten mathematics input.
We present a geometric theory that we have found useful for recognizing mathematical symbols. Characters are represented as parametric curves approximated by certain truncated orthogonal series. This
maps symbols to a low-dimensional vector space of series coefficients in which the Euclidean distance is closely related to the variational integral between two curves. This can be used to find
similar symbols very efficiently. We describe some properties of mathematical handwriting data sets when mapped into this space and compare classification methods and their confidence measures. We
also show how, by choosing the functional basis appropriately, the series coefficients can be computed in real-time, as the symbol is being written and, by using integral invariant functions,
orientation-independent recognition is achieved. The beauty of this theory is that a single, coherent view provides several related geometric techniques that give a high recognition rate and that do
not rely on peculiarities of the symbol set.
A Foundation of Computable Analysis, the Representation Approach (Monday, 10h50–11h40). Computable Analysis studies all aspects of computability and complexity over real-valued data. In 1955 A.
Grzegorczyk and D. Lacombe proposed a definition of computable real functions. Their idea became the basis of the "representation approach" for computability in Analysis, TTE (Type-2 Theory of
Effectivity). TTE supplies a uniform method for defining natural computability on a variety of spaces considered in Analysis such as Euclidean space, spaces of continuous real functions, open, closed
or compact subsets of Euclidean space, computable metric spaces, spaces of integrable functions, spaces of probability measures, Sobolev spaces and spaces of distributions. There are various other
approaches for studying computability in Analysis, but for this purpose, still TTE seems to be the most useful one.
In TTE, computability of functions on the set of infinite sequences over a finite alphabet is defined explicitly. Then infinite sequences are used as “names” for “abstract” objects such as real
numbers, continuous real functions etc. A function on the abstract objects is called computable, if it can be realized by a computable function on names.
In the talk some basic concepts of TTE are presented and illustrated by examples. Contents: Aims of computable analysis, The representation approach (realization), Computable topological spaces,
Representations of subset spaces, Computable metric spaces, Computational complexity, Final remarks.
Gnu Mpfr: back to the future (Friday, 14h50–15h40). The first public version of Mpfr was released in February 2000. Gnu Mpfr was developed after some long discussions with specialists of
floating-point arithmetic and arbitrary precision, in particular Jean-Michel Muller and Joris van der Hoeven. This talk will focus on the history of Mpfr and on the design decisions we took. We will
also detail some of the recent developments, and guess what Mpfr could be in 2022.
© 2011 Joris van der Hoeven
This webpage is part of the MaGiX project. Verbatim copying and distribution of it is permitted in any medium, provided this notice is preserved. For more information or questions, please contact
Joris van der Hoeven. | {"url":"http://magix.lix.polytechnique.fr/magix/magixalix/magixalix-abstracts.en.html","timestamp":"2024-11-04T15:28:34Z","content_type":"application/xhtml+xml","content_length":"50319","record_id":"<urn:uuid:81b0815f-6ee0-4965-a3eb-f675224ce8e2>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00353.warc.gz"} |
For current course information, please visit the Registrar's website.
Enter AST for the Subject and select which semester you would like to view.
Undergraduate course listings (100-400 level); Graduate courses (500 and above)
This course, whose subject matter covers the entire universe, targets the frontiers of modern astrophysics. Topics include the planets of our solar system; the search
AST 203 for extrasolar planets and extraterrestrial life and intelligence; the birth, life, and death of stars; black holes; the zoo of galaxies and their evolution; the Big
The Universe Bang and the expanding universe; and dark matter, dark energy, and the large-scale structure of the universe. This course is designed for the non-science major and has
no prerequisites past high school algebra and geometry. High school physics would be useful, but is not required. Offered every spring.
AST 204 The solar system and planets around other stars; the structure and evolution of stars; supernovae, neutron stars, and black holes; gravitational waves; the interstellar
Topics in Modern Astronomy matter; the formation and structure of galaxies; cosmology, dark matter, dark energy, and the history of the entire universe. Compared to AST 203, this course employs
more mathematics and physics. Intended for quantitatively-oriented students. Offered every spring.
This is an introductory course in astronomy focusing on planets in our Solar System, and around other stars (exoplanets). First we review the formation, evolution and
AST 205 properties of the Solar system. Following an introduction to stars, we then discuss the exciting new field of exoplanets; discovery methods, earth-like planets, and
Planets in the Universe extraterrestrial life. Core values of the course are quantitative analysis and hands-on experience, including telescopic observations. This SEN course is designed for
the non-science major and has no prerequisites past high school algebra and geometry. Offered every fall.
Black holes are amazing: so much mass is contained in such a small region of space that nothing, not even light, can escape. In this class, we will learn to understand
AST 206 / PHY 206 what black holes are, and (equally importantly) what they are not (sorry, science fiction!). We will grapple with the seeming simplicity of black holes and their
Black Holes weirdness. We will also study how black holes are discovered and how they give rise to some of the most astonishing phenomena in the Universe. We will cover concepts at
the forefront of modern astronomy and physics and highlight the power of quantitative thinking (algebra only) and the scientific method. Offered every spring.
AST 250 The Space Physics Laboratory course sequence provides undergraduates at all levels the opportunity to participate in a laboratory developing NASA space flight
Space Physics Laboratory I instrumentation. The courses teach space physics laboratory skills, including ultrahigh vacuum, space instrument cleanroom, mechanical, electrical, and other laboratory
(Non-credit) skills, which then allow students to propose and carry out a significant group research project in the Laboratory. The sequence comprises two semesters with AST 250 as
a prerequisite for AST 251, a credit bearing (P/F) course. Offered every fall.
The Space Physics Laboratory course provides undergraduates at all levels the opportunity to participate in a laboratory developing NASA space flight instrumentation.
AST 251 The courses teach space physics laboratory skills, including ultrahigh vacuum, space instrument cleanroom, mechanical, electrical, and other laboratory skills, which
Space Physics Laboratory II then allow students to propose and carry out a significant group research project in the Laboratory. The class sequence comprises two semesters with AST 250 as a
prerequisite for AST 251, a credit bearing (P/F) course. Offered every spring.
This course introduces students to a new field, Astrobiology, where scientists trained in biology, chemistry, astrophysics and geology combine their skills to
AST 255 / CHM 255 / GEO 255 investigate life's origins and to seek extraterrestrial life. Topics include: the origin of life on earth,the prospects of life on Mars, Europa, Titan, Enceladues and
Life in the Universe extra-solar planets, as well as the cosmological setting for life and the prospects for SETI. 255 is the core course for the planets and life certificate. Offered every
other fall, odd years.
AST 301 / PHY 321 An introduction to general relativity and its astrophysical applications, including black holes, cosmological expansion, and gravitational waves. Offered every other
General Relativity fall, odd years.
AST 303 How do we observe and model the universe? We discuss the wide range of observational tools available to the modern astronomer: from space-based gamma ray telescopes, to
Deciphering the Universe: globe-spanning radio interferometry, to optical telescopes and particle detectors. We review basic statistics and introduce students to techniques used in analysis and
Research Methods in interpretation of modern data sets containing millions of galaxies, quasars and stars, as well as the numerical methods used by theoretical astrophysicists to model
Astrophysics these data. The course is problem-set-based and aims to provide students with tools needed for independent research in astrophysics. Offered every other fall, even
AST 309 / MAE 309 / PHY 309 / Power from the nucleus offers a low-carbon source of electricity. Fission power is well developed, but carries risks associated with safety, waste, and nuclear weapons
ENE 309 proliferation. Fusion energy research, which presents less such risk, is making important scientific progress and progress towards commercialization. We will study the
The Science of Fission and scientific underpinnings of both of these energy sources, strengthening your physical insight and exercising your mathematical and computational skills. We will also
Fusion Energy ask ourselves the thorny ethical questions scientists should confront as they contribute to the development of new technologies. Offered every spring.
GEO 320 / AST 320 / PHY 320 What makes Earth habitable? How have we unraveled the mysteries of planetary interiors? Using a physics-centered approach, we'll explore a range of captivating subjects
Introduction to Earth and in earth and planetary science, including the origin of solar systems, tectonic plates, mantle convection, earthquakes, and volcanoes. You will learn methods to study
Planetary Physics the inner structures and dynamics of planets, not just Earth, but also celestial neighbors like Mars, Venus, Mercury, the Moon, and even exoplanets. Offered every
AST 401 / PHY 401 A general review of extragalactic astronomy and cosmology. Topics include the properties and nature of galaxies, clusters of galaxies, superclusters, the large-scale
Cosmology structure of the universe, evidence for the existence of Dark Matter and Dark Energy, the expanding Universe, the early Universe, Microwave Background radiation,
Einstein Equations, Inflation, and the formation and evolution of structure. Offered every other spring, even years.
Stars form from interstellar gas, and eventually return material to the interstellar medium (ISM). Nuclear fusion powers stars, and is also the main energy source in
AST 403 / PHY 402 the ISM. This course discusses the structure and evolution of the ISM and of stars. Topics include: physical properties and methods for studying ionized, atomic, and
Stars and Star Formation molecular gas in the ISM; dynamics of magnetized gas flows and turbulence; gravitational collapse and star formation; the structure of stellar interiors; production of
energy by nucleosynthesis; stellar evolution and end states; the effects of stars on the interstellar environment. Offered every other spring, odd years.
The course provides an introduction to modern statistics and data analysis. It addresses the question, "What should I do if these are my data and this is what I want to
SML 505 / AST 505 know"? The course adopts a model based, largely Bayesian, approach. It introduces the computational means and software packages to explore data and infer underlying
Modern Statistics parameters from them. An emphasis will be put on streamlining model specification and evaluation by leveraging probabilistic programming frameworks. The topics are
exemplified by real-world applications drawn from across the sciences.
APC 524 / MAE 506 / AST 506 The goal of this course is to teach basic tools and principles of writing good code, in the context of scientific computing. Specific topics include an overview of
Software Engineering for relevant compiled and interpreted languages, build tools and source managers, design patterns, design of interfaces, debugging and testing, profiling and improving
Scientific Computing performance, portability, and an introduction to parallel computing in both shared memory and distributed memory environments. The focus is on writing code that is easy
to maintain and share with others. Students will develop these skills through a series of programming assignments and a group project.
AST 513 Review of hamiltonian mechanics and potential theory. Planetary systems: current surveys and statistics; keplerian elements; restricted 3-body problem; disturbing
Dynamics of Stellar and functions; secular approximations; resonance; tidal effects. Stellar systems: collisionless equilibira and stability; spiral density waves; dynamical frictions and
Planetary Systems dynamical relaxation; structure of the Galaxy; current surveys; the Galactic Center.
AST 514 Theoretical and numerical analysis of the structure of stars and their evolution. Topics include a survey of the physical process important for stellar interiors
Structure of the Stars (equation of state, nuclear reactions, transport phenomena); and the integrated properties of stars and their evolution.
The astrophysics of the interstellar medium: theory and observations of the gas, dust, plasma, energetic particles, magnetic field, and electromagnetic radiation in
AST 517 interstellar space. Emphasis is on theory, including elements of: fluid dynamics; excitation of atoms, molecules, and ions; radiative processes; radiative transfer; and
Diffuse Matter in Space physical properties of dust grains. The theory is applied to phenomena including: interstellar clouds (both diffuse atomic clouds and dense molecular clouds); H II
regions; shock waves; supernova remnants; cosmic rays; interstellar dust; star formation; and global equilibrium models for the ISM.
AST 520 Selected astrophysical applications of electrodynamics, special and general relativity, nuclear and particle physics. Topics may include synchrotron radiation,
High Energy Astrophysics comptonization, orbits and accretion in black-hole metrics, radio sources, cosmic rays, and neutrino astrophysics.
AST 521 Introductory course to plasma physics, as it applies to space and astrophysical systems. Fundamental concepts are developed with mathematical rigor, and application to
Introduction to Plasma the physics of a wide variety of astrophysical systems are made. Topics include magnetohydrodynamics, kinetic theory, waves, instabilities, and turbulence. Applications
Astrophysics to the physics of the solar wind and corona, the intracluster medium of galaxy clusters, the interstellar medium of galaxies, and a wide variety of accretion flows are
AST 522 This course is an overview of cosmology and extragalactic astronomy at the graduate level, with an emphasis on the connection between theoretical ideas and
Extragalactic Astronomy observational data. The Big Bang model and the standard cosmological model are emphasized, as well as the properties and evolution of galaxies, quasars, and the
intergalactic medium.
APC 523 / AST 523 / MAE 507 / A broad introduction to numerical algorithms used in scientific computing. The course begins with a review of the basic principles of numerical analysis, including
CSE 523 sources of error, stability, and convergence. The theory and implementation of techniques for linear and nonlinear systems of equations and ordinary and partial
Numerical Algorithms for differential equations are covered in detail. Examples of the application of these methods to problems in engineering and the sciences permeate the course material.
Scientific Computing Issues related to the implementation of efficient algorithms on modern high-performance computing systems are discussed.
AST 541 Designed to stimulate students in the pursuit of research. Participants in this seminar discuss critically papers given by seminar members. Ordinarily, several staff
Seminar in Theoretical members also participate. Often topics are drawn from published data that present unsolved puzzles of interpretation.
AST 542
Seminar in Observational Students will present talks and discussion on select topics in Astrophysics and Cosmology.
Astrophysics: Current Research
Topics in Astrophysics
An introductory course to plasma physics, with sample applications in fusion, space and astrophysics, semiconductor etching, microwave generation, plasma propulsion,
AST 551 / MAE 525 high power laser propagation in plasma; characterization of the plasma state, Debye shielding, plasma and cyclotron frequencies, collision rates and mean-free paths,
General Plasma Physics I atomic processes, adiabatic invariance, orbit theory, magnetic confinement of single-charged particles, two-fluid description, magnetohydrodynamic waves and
instabilities, heat flow, diffusion, kinetic description, and Landau damping. The course may be taken by undergraduates with permission of the instructor.
AST 552 This is an introductory graduate course in plasma physics, focusing on magnetohydrodynamics (MHD) and its extension to weakly collisional or collisionless plasmas.
General Plasma Physics II Topics to be covered include: the equations of MHD and extended MHD, the structure of magnetic fields, static and rotating MHD equilibria and their stability, magnetic
reconnection, MHD turbulence, and the dynamo effect. Applications are drawn from fusion, heliophysical, and astrophysical plasmas.
AST 553 Hydrodynamic and kinetic models of nonmagnetized and magnetized plasma dispersion; basic plasma waves and their applications; basic instabilities; mechanisms of
Plasma Waves and Instabilities collisionless dissipation; geometrical-optics approximation; conservation laws and transport equations for the wave action, energy, and momentum; mode conversion;
quasilinear theory.
AST 554 Introduction to theory of fluctuations and transport in plasma. Origins of irreversibility. Random walks, Brownian motion, and diffusion; Langevin and Fokker-Planck
Irreversible Processes in theory. Fluctuation-dissipation theorem; test-particle superposition principle. Statistical closure problem. Derivation of kinetic equations from BBGKY hierarchy and
Plasmas Klimontovich formalism; properties of plasma collision operators. Classical transport coefficients in magnetized plasmas; Onsager symmetry. Introduction to plasma
turbulence, including quasilinear theory. Applications to current problems in plasma research.
AST 555 Introduction to experimental plasma physics, with emphasis on high-temperature plasmas for fusion. Requirements for fusion plasmas: confinement, beta, power and
Fusion Plasmas & Plasma particle exhaust. Discussion of tokamak fusion and alternative magnetic and inertial confinement systems. Status of experimental understanding: what we know and how we
Diagnostics know it. Key plasma diagnostic techniques: magnetic measurements, Langmuir probes, microwave techniques, spectroscopic techniques, electron cyclotron emission, Thomson
APC 503 / AST 557 Asymptotic methods, Dominant balance, ODEs: initial and Boundary value problems, Wronskian, Green's functions, Complex Variables: Cauchy's theorem, Taylor and Laurent
Analytical Techniques in expansions, Approximate Solution of Differential Equations, singularity type, Series expansions. Asymptotic Expansions. Stationary Phase, Saddle Points, Stokes
Differential Equations phenomena. WKB Theory: Stokes constants, Airy function, Derivation of Heading's rules, bound states, barrier transmission. Asymptotic evaluation of integrals, Laplace's
method, Stirling approximation, Integral representations, Gamma function, Riemann zeta function. Boundary Layer problems, Multiple Scale Analysis
AST 558 Advances in experimental and theoretical studies or laboratory and naturally-occurring high-temperature plasmas, including stability and transport, nonlinear dynamics
Seminar in Plasma Physics and turbulence, magnetic reconnection, selfheating of "burning" plasmas, and innovative concepts for advanced fusion systems. Advances in plasma applications, including
laser-plasma interactions, nonneutral plasmas, high-intensity accelerators, plasma propulsion, plasma processing, and coherent electromagnetic wave generation.
AST 559 / APC 539 A comprehensive introduction to the theory of nonlinear phenomena in fluids and plasmas, with emphasis on turbulence and transport. Experimental phenomenology;
Turbulence and Nonlinear fundamental equations, including Navier-Stokes, Vlasov, and gyrokinetic; numerical simulation techniques, including pseudo-spectral and particle-in-cell methods;
Processes in Fluids and coherent structures; transition to turbulence; statistical closures, including the wave kinetic equation and direct-interaction approximation; PDF methods and
Plasmas intermittency; variational techniques. Applications from neutral fluids, fusion plasmas, and astrophysics.
AST 560 Analysis of methods for the numerical solution of the partial differential equations of plasma physics, including those of elliptic, parabolic, hyperbolic, and
Computational Methods in eigenvalue type. Topics include finite difference, finite element, spectral, particle-in-cell, Monte Carlo, moving grid, and multiple-time-scale techniques, applied to
Plasma Physics the problems of plasma equilibrium, transport and stability. Basic parallel programming concepts are discussed.
Develop skills, knowledge, and understanding of basic and advanced laboratory techniques used to measure the properties and behavior of plasmas. Representative
AST 562 experiments are: cold-cathode plasma formation and architecture; ambipolar diffusion in afterglow plasmas; Langmuir probe measurements of electron temperature and
Laboratory in Plasma Physics plasma density; period doubling and transitions to chaos in glow discharges; optical spectroscopy for species identification; microwave interferometry and cavity
resonances for plasma density determination; and momentum generated by a plasma thruster.
MAE 522/ AST 564
Applications of Quantum An intermediate-level course in applications of quantum mechanics to modern spectroscopy. The course begins with an introduction to quantum mechanics as a "tool" for
Mechanics to Spectroscopy and atomic and molecular spectroscopy, followed by a study of atomic and molecular spectra, radiative, and collisional transitions, with the final chapters dedicated to
Lasers plasma and flame spectroscopic and laser diagnostics. Prerequisite: one semester of quantum mechanics.
Focus of this course is on fundamental processes in plasma thrusters for spacecraft propulsion with emphasis on recent research findings. Start with a review of the
MAE 528/ AST 566 - Physics of fundamentals of mass, momentum & energy transport in collisional plasmas, wall effects, & collective (wave) effects, & derive a generalized Ohm's law useful for
Plasma Propulsion discussing various plasma thruster concepts. Move to detailed discussions of the acceleration & dissipation mechanisms in Hall thrusters, magnetoplasmadynamic
thrusters, pulsed plasma thrusters, & inductive plasma thrusters, & derive expressions for the propulsive efficiencies of each of these concepts.
AST 568
Introduction to Classical and The first half of this course intends to provide students with a systematic development of the fundamentals of gyrokinetic (GK) theory, and the second half provides
Neoclassical Transport and students with an introduction to transport and confinement in magnetically confined plasmas. | {"url":"https://web.astro.princeton.edu/academic-programs/graduate-program/courses","timestamp":"2024-11-11T00:01:08Z","content_type":"text/html","content_length":"92848","record_id":"<urn:uuid:d10386ad-0b2a-4687-b0b0-fefeae9e64c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00563.warc.gz"} |
Transactions Online
Tamami MARUYAMA, Toshikazu HORI, "Vector Evaluated GA-ICT for Novel Optimum Design Method of Arbitrarily Arranged Wire Grid Model Antenna and Application of GA-ICT to Sector-Antenna Downsizing
Problem" in IEICE TRANSACTIONS on Communications, vol. E84-B, no. 11, pp. 3014-3022, November 2001, doi: .
Abstract: This paper proposes the Vector Evaluated GA-ICT (VEGA-ICT), a novel design method that employs the Genetic Algorithm (GA) to obtain the optimum antenna design. GA-ICT incorporates an
arbitrary wire-grid model antenna to derive the optimum solution without any basic structure or limitation on the number of elements by merely optimizing an objective function. GA-ICT comprises the
GA and an analysis method, the Improved Circuit Theory (ICT), with the following characteristics. (1) To achieve optimization of an arbitrary wire-grid model antenna without a basic antenna
structure, the unknowns of the ICT are directly assigned to variables of the GA in the GA-ICT. (2) To achieve a variable number of elements, duplicate elements generated by using the same feasible
region are deleted in the ICT. (3) To satisfy all complex design conditions, the GA-ICT generates an objective function using a weighting function generated based on electrical characteristics,
antenna configuration, and size. (4) To overcome the difficulty of convergence caused by the nonlinearity of each term in the objective function, GA-ICT adopts a vector evaluation method. In this
paper, the novel GA-ICT method is applied to downsize sector antennas. The calculation region in GA-ICT is reduced by adopting cylindrical coordinates and a periodic imaging structure. The GA-ICT
achieves a 30% reduction in size compared to the previously reported small sector antenna, MS-MPYA, while retaining almost the same characteristics.
URL: https://global.ieice.org/en_transactions/communications/10.1587/e84-b_11_3014/_p
author={Tamami MARUYAMA, Toshikazu HORI, },
journal={IEICE TRANSACTIONS on Communications},
title={Vector Evaluated GA-ICT for Novel Optimum Design Method of Arbitrarily Arranged Wire Grid Model Antenna and Application of GA-ICT to Sector-Antenna Downsizing Problem},
abstract={This paper proposes the Vector Evaluated GA-ICT (VEGA-ICT), a novel design method that employs the Genetic Algorithm (GA) to obtain the optimum antenna design. GA-ICT incorporates an
arbitrary wire-grid model antenna to derive the optimum solution without any basic structure or limitation on the number of elements by merely optimizing an objective function. GA-ICT comprises the
GA and an analysis method, the Improved Circuit Theory (ICT), with the following characteristics. (1) To achieve optimization of an arbitrary wire-grid model antenna without a basic antenna
structure, the unknowns of the ICT are directly assigned to variables of the GA in the GA-ICT. (2) To achieve a variable number of elements, duplicate elements generated by using the same feasible
region are deleted in the ICT. (3) To satisfy all complex design conditions, the GA-ICT generates an objective function using a weighting function generated based on electrical characteristics,
antenna configuration, and size. (4) To overcome the difficulty of convergence caused by the nonlinearity of each term in the objective function, GA-ICT adopts a vector evaluation method. In this
paper, the novel GA-ICT method is applied to downsize sector antennas. The calculation region in GA-ICT is reduced by adopting cylindrical coordinates and a periodic imaging structure. The GA-ICT
achieves a 30% reduction in size compared to the previously reported small sector antenna, MS-MPYA, while retaining almost the same characteristics.},
TY - JOUR
TI - Vector Evaluated GA-ICT for Novel Optimum Design Method of Arbitrarily Arranged Wire Grid Model Antenna and Application of GA-ICT to Sector-Antenna Downsizing Problem
T2 - IEICE TRANSACTIONS on Communications
SP - 3014
EP - 3022
AU - Tamami MARUYAMA
AU - Toshikazu HORI
PY - 2001
DO -
JO - IEICE TRANSACTIONS on Communications
SN -
VL - E84-B
IS - 11
JA - IEICE TRANSACTIONS on Communications
Y1 - November 2001
AB - This paper proposes the Vector Evaluated GA-ICT (VEGA-ICT), a novel design method that employs the Genetic Algorithm (GA) to obtain the optimum antenna design. GA-ICT incorporates an arbitrary
wire-grid model antenna to derive the optimum solution without any basic structure or limitation on the number of elements by merely optimizing an objective function. GA-ICT comprises the GA and an
analysis method, the Improved Circuit Theory (ICT), with the following characteristics. (1) To achieve optimization of an arbitrary wire-grid model antenna without a basic antenna structure, the
unknowns of the ICT are directly assigned to variables of the GA in the GA-ICT. (2) To achieve a variable number of elements, duplicate elements generated by using the same feasible region are
deleted in the ICT. (3) To satisfy all complex design conditions, the GA-ICT generates an objective function using a weighting function generated based on electrical characteristics, antenna
configuration, and size. (4) To overcome the difficulty of convergence caused by the nonlinearity of each term in the objective function, GA-ICT adopts a vector evaluation method. In this paper, the
novel GA-ICT method is applied to downsize sector antennas. The calculation region in GA-ICT is reduced by adopting cylindrical coordinates and a periodic imaging structure. The GA-ICT achieves a 30%
reduction in size compared to the previously reported small sector antenna, MS-MPYA, while retaining almost the same characteristics.
ER - | {"url":"https://global.ieice.org/en_transactions/communications/10.1587/e84-b_11_3014/_p","timestamp":"2024-11-08T20:14:08Z","content_type":"text/html","content_length":"65480","record_id":"<urn:uuid:a641b2df-3833-4a75-a7a7-993552a97350>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00552.warc.gz"} |
MiRKAT Package Vignette
Simulate time to event data
We again use Charlson’s throat microbiome data to demonstrate the use of MiRKAT-S. Data loading and preparation are the same as in the previous section. Because the original dataset has a binary
phenotype (smoking) rather than a measure of censored time to event outcomes, we consider smoking status and gender as covariates and generate null outcome data from the Exponential distribution.
Specifically, we generate survival times as \(S \sim \text{Exponential}(1 + I(\text{smoke}) + I(\text{male}))\), and censoring times as \(C \sim \text{Exponential}(0.75)\). Then the observed outcome
measures are observation time \(T = \text{min}(S, C)\) and an indicator variable for whether the event was observed, \(\Delta = I(S \leq C)\). That is, when delta = 1, the corresponding “obstime” is
the survival time, and when delta = 0, the corresponding observation is censored and “obstime” is the time of censoring. This simulation procedure results in approximately 33% censoring.
# Simulate outcomes
# Here, outcome is associated with covariates but unassociated with microbiota
# Approximately 33% censoring
SurvTime <- rexp(60, (1 + Smoker + Male))
CensTime <- rexp(60, 0.75)
Delta <- as.numeric(SurvTime <= CensTime )
ObsTime <- pmin(SurvTime, CensTime)
The p-value for the test may be generated using permutation or Davies’ exact method. Davies’ exact method, which computes the p-value based on a mixture of chi-square distributions, is used when
“perm = F”. We use a small-sample correction to account for the modest sample sizes and sparse OTU count matrices that often result from studies of the microbiome.
# Davies' exact method
MiRKATS(obstime = ObsTime, delta = Delta, X = cbind(Smoker, Male, anti), Ks = Ks,
perm = F, omnibus = "cauchy", returnKRV = T, returnR2 = T)
## $p_values
## K.weighted K.unweighted K.BC
## 0.4687006 0.3658690 0.2334753
## $omnibus_p
## [1] 0.339411
## $KRV
## K.weighted K.unweighted K.BC
## 0.0005903327 0.0006214442 0.0002618715
## $R2
## K.weighted K.unweighted K.BC
## 0.0007363894 0.0064688465 0.0042391164
Using “perm = T” indicates that a permutation p-value should be calculated for each kernel-specific test. Overall, permutation is recommended when the sample size is small, as Davies’ method may be
slightly anti-conservative with very small sample sizes. MiRKAT-S will generate a warning when permutation is not used for sample sizes \(n \leq 50\). “nperm” indicates the number of permutations to
perform to generate the p-value (default = 1000).
# Permutation
MiRKATS(obstime = ObsTime, delta = Delta, X = cbind(Smoker, Male, anti), Ks = Ks,
perm = T, omnibus = "cauchy", returnKRV = T, returnR2 = T)
## $p_values
## K.weighted K.unweighted K.BC
## 0.5765766 0.4154154 0.3003003
## $omnibus_p
## [1] 0.4218266
## $KRV
## K.weighted K.unweighted K.BC
## 0.0005903327 0.0006214442 0.0002618715
## $R2
## K.weighted K.unweighted K.BC
## 0.0007363894 0.0064688465 0.0042391164
As above, the omnibus p-value may be generated either using residual permutation to create a null distribution of minimum p-values, and the minimum p-value from the original MiRKAT-S analysis is
tested against this distribution as the test statistic of the omnibus test, or using the Cauchy combination test. The permutation-based omnibus test is described further at https://github.com/hk1785/
OMiSA (Koh 2018, DOI: https://doi.org/10.1186/s12864-018-4599-8). | {"url":"https://cran.usk.ac.id/web/packages/MiRKAT/vignettes/MiRKAT_Vignette.html","timestamp":"2024-11-03T12:37:38Z","content_type":"text/html","content_length":"65314","record_id":"<urn:uuid:6c16725f-f395-41a6-86ee-0c8b17059734>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00345.warc.gz"} |
Excel Tutorials for Beginners
Free Excel training for middle and high school students
The following Excel tutorial videos were created specifically for students who want to learn how to use a spreadsheet for a report, science project, or presentation. If you have never used a
spreadsheet before, start with the introduction. Watching all of the videos will take about 25 minutes. See the Google Sheets Tutorials page if you want to use Google Sheets instead of Excel.
1. Introduction to Spreadsheets (3:20)
View Transcript
A spreadsheet lets you organize data and calculations.
My friend Sarah has been practicing her typing speed, and she's given me some data on her timed typing tests: how many words she typed, and how many minutes she had to type them in.
In a spreadsheet, you always put each piece of data in its own cell, so I'll put the labels in different cells in the top row.
In Row 2, I'll pick one of her typing tests and record the number of words she typed, in cell A2, and how long it took, in cell B2. I'll put the data from each of her tests in a new row, until
they're all in the spreadsheet.
My goal will be to calculate her typing speed in words per minute for each test.
Before I do the calculations, I'd like to style my data. I'll add a cell border so it's easier to see where my data is.
I can style the headers myself by making them bold or changing a font, or I can use a preset setting.
Either way, it's nice to draw attention to what I measured.
You can see some of my labels are too long for the cell, so I can resize them by hovering between two columns until I get a double-sided arrow, then I can click and drag to change the column's width.
You can see that words are aligned on the left side of the cell, and numbers are all pushed to the right.
If I want, I can put them all on either side or in the middle.
Now I'm ready to calculate the words per minute for each typing test.
I can start any calculation in a spreadsheet by typing the equals sign, then I just click on the cells I want to use; so the first words per minute calculation will be "= A2/B2".
I can copy this formula and paste it into all the other cells in the table.
Then, if I click one of those cells, I can see which numbers it is using in its calculation.
Some of my results have decimals and some don't. I think it looks cleaner for all of them to have the same number of decimal places.
I can select the values and specify that they are numbers. Then I can decrease to only show 1 decimal place.
Now that I have the words per minute for each of Sarah's tests, I'd also like to find her average words per minute.
I could calculate this by adding up each of her words per minute results and dividing by the total number of tests she took.
If I do that, I'll want to be sure to include parentheses because the order of operations always applies.
To make things simpler, I can use a function, by typing = Average() , and selecting the cells I want to average out.
I can use functions to find Sarah's minimum and maximum typing rates as well.
So that's an intro on some of the functionality you can use with spreadsheets. They give you a lot of great options.
Check out our videos for tips and tricks to go a little deeper.
2. Choosing a Chart Type (1:54)
So you've made a hypothesis and finished an experiment? Exciting! So now how do you share your results?
Scientists love graphs. But which kind of graph is best for your experiment? ...
... view rest of transcript
There's some cool graph options out there. Let's see which one's best for you.
First, think about what kind of information you have. What are your inputs and your outputs?
There's 3 kinds of input. It could be time, like if you measured the temperature of water every 5 minutes.
There's numerical, like if you measured the height of a catapult projectile at different distances away from the catapult.
And there's categories, like if you measure the pH of different kinds of liquids.
Once you know which kind of input you have, there's just one more question to ask yourself.
Is your output--what you actually measured--a number, like the temperature of water or the pH of a liquid?
Or is it more like a tally of responses, like how many people preferred one scenario over another?
Take a minute to think about your data from your experiment, then pick the chart type that matches it.
If your input is time, a line graph is a great choice.
If your input is numerical, check out an XY scatter plot.
If your input is different categories, try using a bar graph, column chart, or pictograph.
And if you have survey results or you want to show the percent of a whole, look at using a Pie chart.
Finally, if you want to show how survey results or the percent of a whole changed over time, check out a time-based area chart.
Once you know which chart type would showcase your results well, check out the video that walks you through how to create that chart yourself.
3. Create a Line Chart (4:32)
Download Line Chart Example
A line chart shows a trend of how values change over time.
It takes 3 steps to create a line chart.
1. Organize the data
2. Create a chart
3. Add labels and style it
Before we make the chart, we'll put our data into Excel. So how do we organize it so it can turn into a chart? ...
... view rest of transcript
We start with the title. We want to be as descriptive as possible.
Imagine someone is going to find your chart, without any explanation of what it's about. Help them figure out what they're looking at.
Then we'll organize our data into columns.
The first column holds the Date or Time values, which will be listed across the X axis of your chart.
Then, you'll put each set of data you recorded in its own column to the right of the X values.
For example, I recorded the daily high and low temperature, and I'd like one line to show the high, and one line to show the low.
It's very important to label your data with the units you measured in. For mine, are these degrees Fahrenheit or Celsius?
If you measured a distance, was it in inches, centimeters, or thousands of miles?
Just showing numbers by themselves isn't as easy to understand as if you include their units.
Now that our columns are labeled, we just fill in all the data. The times should match up with the data we recorded at those times.
Once our data is in the spreadsheet, we're ready to make the line graph!
This is easy. We select our data.
Click Insert > Line Graph > Line with Markers.
Voila! We have a line chart.
It even gave us a legend with our labels.
Now, we add a chart title and axis titles.
We can link the chart title to the one we already wrote.
Select the box the title is in.
In the formula bar type "=A1"
Now we'll add a label for the X axis, and link it to the label we wrote.
In the formula bar, type "=A2"
For the Y axis, if we have more than one line we need a generic label to show what the numbers on the side mean.
For mine, I can say Temperature (F).
Yours might be Plant Height (cm).
Our last step is to make the chart look nice.
For some charts, you don't need the Y axis to start at 0.
Like for temperature, we can zoom in on the data I measured, instead of squishing it all to the top.
If that works for your data, we'll select the axis and change the minimum.
At this point, our chart is looking really good.
We can easily add more lines.
Just insert a new column and add the data.
Excel automatically adds a line with those values to the chart.
Now for my favorite part.
We can style our chart with different colors and designs by selecting the chart > Design, pick settings that we like.
For more specific changes to the lines on your chart, you can go to the paint bucket, officially the "Fill and Line" menu.
Here, you can change the color, size, and other details for each line individually.
Now we have a great-looking line graph.
It shows what we measured, the units we recorded them in, and how the values changed over time.
If you want a head start on creating your own line graph, you can download the Excel file I used for this video.
4. Create a Column or Bar Chart (4:44)
Download Column Chart Example
A column chart compares one measurement for different things or categories.
For example, you could compare the average lifespan of different kinds of animals, the pH of different liquids, or the GDP of different countries.
Excel makes it easy to create a column chart ...
... view rest of transcript
You'll label 2 columns: the first is the kind of categories you have, and the second is what you measured for those categories.
It's very important to label your data with the units you measured in. For mine, the rainfall is measured in inches, but I have to label it or someone could think it was centimeters.
Next, you'll type your categories in the first column, and put their values in the second column.
Once your data is in the spreadsheet, you are ready to make the column chart!
This is easy. We select our data.
Then all we have to do is go to the Insert tab, find the Column Chart option, and choose Clustered Column.
If you want to do a bar chart instead, you can select that here--the only difference is that your bars go side-to-side instead of up and down.
You will automatically get a title, but you can change it if you'd like.
The only thing you have to add is labels for the axes. You already wrote the label of the X axis, so you can link to it by going to the formula bar and typing equals, then selecting the cell with the
name of your categories.
For the Y axis, you can either link to the value you wrote before, or you can type a different description of your measurement.
Either way, be sure to include the units.
You can display more than one value for each category.
If you've already created a chart, the easiest way to add data is to type the category title and all of the values in the next column.
Then, you can select the chart; that will highlight the data it's using.
Find the lower-right corner of the highlighted area, and drag it to include your new data.
Then you'll want to check that your Chart Title and Axis Title are still correct.
If you have more than one column for each category, you should add a legend to you chart, to make it clear what each column represents.
You can change your chart type at any time, if you decide a bar chart would be better than a column chart for your data.
Select the chart, then go to Design, Change chart Type. Then you'll find Bar on the side menu.
Now our chart has all the information it needs--everything else I'll show you is styling.
You can adjust how many decimal places the Y axis shows by selecting the axis and going to Axis Options > Number. Select Number from the dropdown list, and type the number of decimal places you'd
like to display.
You can also specify the maximum value of your Y axis--just be sure that it's high enough to show all your data.
You can style the chart with different colors and designs by selecting the chart, go to Design, and pick from a good selection of pre-set styles.
One especially cool feature of column charts is that you can use a picture instead of a solid color.
Select one of your column series, then go to the bucket fill and line menu. Choose Picture or Texture fill.
Find a picture you'd like to use, and then change it from Stretch to Stack. You can add different pictures for each of the column series.
Technically, this kind of chart is called a pictograph, and you can make it from either a column chart or a bar chart.
So that's how you make a column chart!
If you want a head start on creating your own column chart, you can download the Excel file I used for this video.
5. Create an XY Scatter Plot (6:06)
Download XY Scatter Plot Example
A scatter graph or scatter plot can help you visualize how two numerical values are related. One dot represents both measurements for a single instance.
For example, you could graph the size of different trees in a forest. Each dot on this graph represents one tree, and shows its circumference and its height ...
... view rest of transcript
Looking at it, I can see that the trees that are bigger around than other trees are also taller than other trees.
Scatter graphs can compare any 2 numbers--you could look at people's height vs weight, city size vs population, or the amount of time students studied vs their grade on a test.
Whatever you want to show on the scatter graph, you start with putting your data in Excel.
You'll label 2 columns: the first will be your X axis, and the second will be your Y axis.
If you had any control over one of the variables, you'll want to put that one first.
For example, if you gave different amounts of water to each tree, that's an "independent" variable you controlled, and should go across the bottom of your graph, and in the first column of your data.
If you didn't have anything to do with either measurement, you can pick either one for this first column.
It's very important to label your data with the units you measured in.
For mine, the circumference and height were both measured in meters, but I have to label it or someone might assume I measured in feet.
Once your data is in the spreadsheet, you are ready to make the scatter chart!
This is easy. First, you will select your data.
Then go to the Insert tab, find the XY Scatter Chart option, and choose Scatter.
The first thing to do is add a title. You want to describe both of your measurements, usually in the format of "Y Measurement" vs "X Measurement", with units at the end.
Next, you'll add labels for the axes.
If you want to use the same wording you used when you labeled your data, you can link to that text by going to the formula bar and typing equals, then selecting the cell that has your data label.
If you would like to change the wording, you can select the Axis Title and type a different description of your measurement; be sure to include the units here as well.
At this point, you have a nice looking graph that has the data and the labels. Let's take a look at how you would display more than one set of data on your chart.
There's 2 different ways you could add data to your chart, and it depends on whether the new data uses the same X values you already recorded, or if it has different X values.
Let's first look at adding data with the same X values.
Suppose that for each Circumference of my trees, I knew the average height of Douglas Firs in the area, and wanted to include those values on my chart.
I'd record these values in a new column to the right of my data, so for the 0.30 meter circumference tree, the average height is 8.40 meters, and so on.
Now I can select the graph and find where my data is highlighted.
I'll find the lower right corner of the highlighting, then drag it to include my new data.
You can see it gets added as a new dataset with its own color.
I would want to add a legend to my chart to make it clear which dataset was which, and I might need to update my Title and Axis labels.
Now, let's look at how we would add data if the X values are not the same.
Suppose I went to a different forest and measured the circumference and height of trees there.
I'd record the data in a different area of my sheet, including both the X and the Y values.
I need to be sure that the headers of my first dataset apply to my new dataset--so for mine, I'd record circumference in the first column and height in the second column, and since the units for my
first dataset were measured in meters. I'd need to record the new data in meters as well.
Once you have your second dataset in the workbook, you select the chart, and on the Design tab go to "select data."
You will add a series and name it--whatever you want to appear in the legend.
Then you'll select the X values and the Y values of your dataset.
When you're done, you may also want to update the name of your original series.
Again, you'd want to add a legend to your chart. You may need to update your Title and Axis labels.
You can show a trendline of each dataset.
If you select a series, right click, and choose Add Trendline, it will add a straight line that shows the general trend of your data.
If your data looks more like a curve than a straight line, some of the other options might be a better match.
When you've found the trendline type you want to use, you can check the box to display the trendline's equation on your chart--this may be especially helpful for future analysis of your data.
You can style the chart with different colors and designs by selecting the chart, go to Design, and pick from a selection of pre-set styles.
If you want a head start on creating your own scatter graph, you are welcome to download the Excel file I used for this video, and then add your own data to it.
6. Create a Pie Chart (4:28)
Download Pie Chart Example
A pie chart shows how different categories make up a whole.
For example, you could survey students about their favorite animal, and make a pie graph of the results ...
... view rest of transcript
Out of the students in this survey, you can see that more than half prefer horses, and about a third prefer dolphins.
The remaining students had a lot of different favorite animals.
Pie charts are best when you'd like to emphasize one especially big value or one especially small value.
This chart would be great for showing that a lot of students like horses, or for calling out that my hypothesis was wrong, if I thought that most students would prefer dogs.
It's important to know that pie charts are not always the right answer for making comparisons.
If your numbers are closer together, like 25%, 30%, and 40%, it's hard to tell from a pie chart which values are bigger.
A column chart is usually a better option, since it's easier to compare similar values, and it's easier to read when you have lots of categories.
If you do want to make a pie chart, you'll label 2 columns: the first is the kind of categories you have, and the second is what you measured in those categories.
Next, you'll fill in the categories in the first column, and their values in the second column.
Once your data is in the spreadsheet, you are ready to make the pie chart!
This is easy. You select your data. Then all you have to do is go to the Insert tab, find the pie chart option, and choose pie.
Now you're going to update your title.
You want it to be descriptive enough that if someone found your chart without any explanation, they'd know what they were looking at.
If you've already created a chart and you want to add another slice to your pie, the easiest way to do that is to type the missing category and number at the bottom of your dataset.
Remember that if you're using percentages, they need to add up to 100%, so be sure your data is still accurate after you add a new value.
Then, you can select the chart; that will highlight the data it's using.
Find the lower-right corner of the highlighted area, and drag it to include your new data.
You can change your chart type at any time, if you decide a column chart would be better than a pie chart for your data, especially as you start adding more categories.
Select the chart, then go to Design, Change Chart Type. Then you'll find Column on the side menu.
You can preview other chart options to experiment with as well.
Now your chart has all the information it needs--everything else I'll show you is styling.
You can style the chart with different colors and designs by selecting the chart, go to Design, and pick from a good selection of pre-set styles.
Now you can do more customization on top of the styling.
You can add labels by right-clicking the chart and choosing add data callouts.
On the label options section, you can change the separator to a space so it doesn't take up as much vertical room.
You may want to emphasize one slice of the pie chart more than the others.
If so, you can select the slice, and the side bar will say "Format Data Point".
Then under series options, you can increase the point explosion to pull it out from the rest of the pie.
On the fill and line bucket menu, you can change the colors of the whole pie, and you can change one slice at a time.
So that's how to create a pie chart!
If you want a head start on creating your own chart, you can look in the description for a link to download this file.
7. Printing a Chart in Excel (1:57)
This video shows various ways to print a chart in Excel, including how to copy a chart from Excel into a report in Word.
After you have made a chart and you are ready to print it, you have a few different options to customize what you print and how big it is. ...
... view rest of transcript
There's the basic print, which will print everything exactly as you see it in Excel--except of course without the gridlines that separate each cell.
This is the default if you have a cell selected when you print.
If you select a chart before you print, Excel will only print the chart--and it will automatically fit the chart onto one page.
If you want to print a specific part of your worksheet, then you can select the cells you want to print.
And then make sure that under the print settings you are printing the selection.
If you want Excel to remember where your print selection was so you can re-print the same selection later, you can save it as a print area.
You will go to Page Layout > Print Area > set print area.
If you are printing for a poster, you may want everything to print larger than it shows on your screen.
In that case, you can use custom scaling to increase the size of everything you print.
One note is you can't do that if you select a chart first, so you'll have to either print a selection or a range in order to change the custom scaling.
If you're not going to print straight from Excel, and instead you're going to copy your chart into a Word report or a Power Point presentation, I would recommend pasting it as a picture.
This will keep all the formatting and numbers the same, even if you update your Excel file later on.
So that's how to print a chart!
Basic Spreadsheet Terminology
Though versions of Excel look different, there are some common features and common words used to describe those features. You need to know some of these words to understand some of the instructions
given in various Excel tutorials.
The picture below shows an Excel 2010 workbook file named My_Workbook.xls. A spreadsheet file is called a workbook. A workbook may contain one or more worksheets, shown as tabs at the bottom of the
Excel window. Each worksheet is made up of cells in a rectangular grid. The rows are numbered. The columns are labeled with letters. A group of cells is called a range.
In the image above, cell C4 is currently selected. The Formula Bar is showing that cell C4 contains a formula, which you can identify because it starts with an equal sign (=). The formula is adding
the values found in cells A4 and A5.
Almost everything else you need to know about Excel is explained in Microsoft's own getting started guide found online or in Excel's Help system.
More Excel Tutorials
• 1 Min 24 Sec - (lock the upper rows or left column from scrolling)
• 2 Min 46 Sec - (to quickly create lists of numbers, month names, etc.)
• 6 Min 10 Sec - (to quickly reformat lists) | {"url":"https://totalsheets.com/edu/excel-tutorials-for-beginners.html","timestamp":"2024-11-07T15:04:56Z","content_type":"text/html","content_length":"53409","record_id":"<urn:uuid:d830f6d3-28eb-4047-b3f3-32fb2c0c533d>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00787.warc.gz"} |
VLOOKUP vs XLOOKUP Function - What's the Difference?
VLOOKUP has long been the benchmark based on which user’s Excel knowledge was judged.
You don’t know Excel if you can’t use VLOOKUP.
Then things improved, and VLOOKUP’s reign came to an end because of a newer and better function—XLOOKUP.
The Excel team considered years of feedback about VLOOKUP limitations, and when they finally released a better version in XLOOKUP, they made sure most of it was sorted.
In this article, I will make a strong case for why XLOOKUP is a much better function (of course) and explain the difference between VLOOKUP and XLOOKUP.
So buckle up as I compare these two functions and get technical.
Click here to download the example file and follow along
Syntax of VLOOKUP and XLOOKUP
Below is the syntax of the VLOOKUP function:
VLOOKUP(lookup_value, table_array, col_index_num, [range_lookup])
• lookup_value – The value you want to search for (the lookup value)
• table_array – The range of cells that contains the data you want to search through. This is the table array
• col_index_num – The column number in the table from which to retrieve the value
• [range_lookup] – A logical value (TRUE or FALSE). TRUE finds an approximate match, while FALSE finds an exact match. If omitted, it defaults to FALSE (which is an approximate match)
And here is the syntax of the XLOOKUP function:
XLOOKUP(lookup_value, lookup_array, return_array, [if_not_found], [match_mode], [search_mode])
• lookup_value – The value you want to search for (lookup value)
• lookup_array – The range of cells where you want to look for the lookup_value.
• return_array – The range of cells from which to return the value.
• [if_not_found] – The value to return if the lookup_value is not found.
• [match_mode] – Specifies the type of match to perform:
□ 0: Exact match (default)
□ -1: Exact match or next smaller item
□ 1: Exact match or next larger item
□ 2: Wildcard match
• [search_mode] – Specifies the search mode:
□ 1: Search from first to last (default)
□ -1: Search from last to first
□ 2: Perform a binary search in ascending order
□ -2: Perform a binary search in descending order
Just by looking at the syntax, you may think VLOOKUP is easier to use. But this is one of the cases where more is actually better. With more arguments, XLOOKUP actually makes it easier to use the
function, and it also gives it much-needed flexibility, which VLOOKUP lacks.
We will see how this plays out in the next section, where I will compare VLOOKUP and XLOOKUP using specific examples.
VLOOKUP vs. XLOOKUP – Differences
Let’s understand the difference between VLOOKUP and XLOOKUP by looking at some examples.
VLOOKUP Uses Harcoded Column Numbers to Return Value from, XLOOKUP Uses an Array
When using the lookup, you need to specify the exact column number from which you want to extract the result.
With XLOOKUP, there is no need for column counting as you can specify the lookup_array and return_array separately.
Below, I have a data set where I have employee names, their ID, and their department name in three columns, and I want to fetch the department name for Gloria in column G.
With VLOOKUP, you can do this using the below formula:
And with XLOOKUP, you can do the same with the following formula:
In the VLOOKUP formula, I need to specify the column number from where I want to fetch the value for the matching lookup value.
In this example, since the department name is in the third column of the dataset, I have to specify 3 as the third argument in the VLOOKUP formula.
On the contrary, with XLOOKUP, the lookup array and the return array are two separate arguments, so I don’t need to count columns and specify the one from where I want the result.
Instead, I can select the lookup array and the return array independently.
VLOOKUP Always Looks Up in the Left Most Column (XLOOKUP Doesn’t)
One of the biggest limitations of the VLOOKUP function is that it always searches for the lookup value in the first (left-most) column of your data range (the table_array argument).
This also means that you cannot look up and return a value from the left of the lookup column.
In contrast, with XLOOKUP, you can specify any column for the lookup, not just the first one. This also means that you can look up and return values from the left of the lookup column.
Below, I have a data set where I have employee names, their ID, and their department name in three columns, and I want to fetch the employee name for a given employee ID.
Unfortunately, this is not something you can do with VLOOKUP with the current construct of the data set.
This is because if you select the entire data set, the employee id will not be the leftmost column in the data set. If you select the table array starting from the Employee ID column, then you won’t
be able to return the name as it won’t be a part of the table array in that case.
But this is not a problem for XLOOKUP.
The formula below will easily give me the result:
XLOOKUP Defaults to Exact Match, VLOOKUP Defaults to Approximate Match
Another welcome improvement in the XLOOKUP function is that the match mode argument defaults to an exact match.
In VLOOKUP, if you don’t specify the match mode argument (called [range_lookup]), it defaults to approximate match, which is a less-used use case, and in most cases, users are looking for an exact
Below, I have a data set with students’ names in column A and their scores in column B, and I want to get the score of the student named Joseph in cell E2.
If I use the VLOOKUP function without specifying that I need an exact match, it will default to approximate match, giving me the wrong result.
As you can see, the above VLOOKUP formula gives me a score of 72, while the actual result should be 68.
To get the right result, I will have to use the below VLOOKUP formula, where I need to specify the exact match mode (by using FALSE or 0 as the fourth argument).
Since XLOOKUP defaults to an exact match (in case the match mode argument is not specified), I can use the below XLOOKUP formula to get the right result:
If you want to use an approximate match in XLOOKUP, you can specify the match mode separately (it’s the fifth argument).
Also read: VLOOKUP Vs. INDEX/MATCH – Which One is Better? (Answered)
XLOOKUP Can Lookup Values from Bottom to Top
When using VLOOKUP, it scans the lookup column starting from top to bottom and returns the corresponding value as soon as it finds a match.
In XLOOKUP, you can specify the direction of the search – which can be from first to last or last to first. If omitted, it would default to the commonly used first-to-last search (i.e., top to bottom
in vertical lookup and left to right in horizontal lookup).
Below, I have a data set where I have department names in column A and their employee names in column B. I want to know the name of the last employee tagged as part of the marketing department.
While I cannot do this using the Vlookup formula, it can easily be done using the following XLOOKUP formula:
The above formula gives me the result as Minnie, who is the last employee name for the Marketing department in the list.
XLOOKUP Can Return Values From Multiple Columns
Since XLOOKUP is available in Excel versions that also have dynamic arrays, you can use it to return multiple lookup values from different columns.
VLOOKUP, on the other hand, is designed only to return one value in the standard format. While you can hack the formula to give you more than one result, you will find XLOOKUP to be a lot easier in
such situations.
Below, I have a dataset where I have employee names, their employee ID, and their department names in three separate columns. I want to extract the employee id and their department name for the name
in cell E2.
Let’s see how to do this using VLOOKUP.
I can do this using two separate formulas:
So I have entered one formula that gives me the value from column 2 and then the other formula that gives me the value from column 3.
And, if you have access to dynamic arrays, you can also use the formula below:
With XLOOKUP, you can do the same thing with the following formula:
With XLOOKUP, I can specify the return array as a multiple-column range, and it will return the results from all the columns for the matching lookup value.
Also read: How to Use VLOOKUP with Multiple Criteria
XLOOKUP Can Handle Situations with Missing Values
Another welcome improvement in the XLOOKUP function is that it has an argument that allows you to specify what it should give you in case it doesn’t find the lookup value.
Below, I have a data set where I have employee names in column A and their employee ID in column B, and I want to get the employee ID for the name in cell D2. In case the formula is not able to find
the name, I want it to return “Not Found”
The following formula will do this:
=XLOOKUP(D2,A2:A15,B2:B15,"Not Found")
In the above formula, I have specified “Not Found” as the fourth argument, which would be returned in case the formula is not able to find the lookup value.
If you want to do the same thing with the VLOOKUP function, you will have to use it along with IFERROR or IFNA functions:
=IFNA(VLOOKUP(D2,A2:B15,2,0),"Not Found")
XLOOKUP Approximate Match Doesn’t Need Data to be Sorted
VLOOKUP has two match modes – Exact match and Approximate match.
For the approximate match to work in VLOOKUP, your data needs to be sorted in ascending order.
With the XLOOKUP function, you get two approximate match modes:
• Exact match or the next smaller item
• Exact match or the next larger item
Also, while your data needs to be sorted in ascending order when using the approximate matching in VLOOKUP, there is no need for your data to be sorted when using the approximate matching XLOOKUP.
Below, I have a data set where I have student names in column A and their scores in column B, and I want to get their grades in column C based on the table on the right.
As you can see, the grades table is not sorted in ascending or descending order.
If I use the following VLOOKUP function with this data set, it is going to give me the wrong result.
VLOOKUP Approximate match gives wrong result if the table is not sorted in ascending order
This is understandable as the approximate match in VLOOKUP requires the table to be sorted in ascending order, and our grades table is not sorted.
But XLOOKUP can work with this unsorted table:
In the above XLOOKUP formula, I have used -1 as the fifth argument, which gives the exact match grade or the next smaller grade.
XLOOKUP Has a Wildcard Character Match Mode Option
While VLOOKUP has only two match modes – Exact match and Approximate match, XLOOKUP has the following four match modes:
1. Exact match
2. Exact match or the next smaller item
3. Exact match or the next larger item
4. Wildcard character match
While I’ve already covered the first three match modes in the previous examples, another new one in XLOOKUP is the Wildcard character match.
With VLOOKUP, if you have a wildcard character in the lookup value, it will automatically be considered.
But with XLOOKUP, you need to explicitly specify whether you want the function to use wildcard characters as wildcards or not.
Let me explain with an example.
Below, I have a dataset where I have the student names in column A and their scores in column B, and I want to get the score of the student name in cell D2.
I can use the below VLOOKUP formula to do this:
As you can see, VLOOKUP is programmed to automatically consider wildcard characters (such as asterisk, question mark, or tilde).
But see what happens when I use the below XLOOKUP formula:
This gives me an error, as it is programmed to ignore wildcard characters unless specifically specified.
If I want XLOOKUP to consider the asterisk as a wildcard character, I can use the below formula:
Here, I have used 2 as the fifth argument, which makes XLOOKUP consider wildcard characters as wildcards.
So if you’re in a situation where you do not want your lookup formula to treat wildcard characters as wildcards, you can do that with XLOOKUP but not with VLOOKUP.
Lookup Value Size Limit
The lookup value in VLOOKUP can up to 255 characters long. However, there is no such limit on the lookup value when using XLOOKUP.
But this may not be an issue in most scenarios.
In case you’re working with long lookup values such as text, this could be an issue with VLOOKUP.
VLOOKUP is Faster than XLOOKUP (Surprisingly)
With all the improvements made to XLOOKUP, you can expect the function to be faster than its predecessor, VLOOKUP.
However, based on multiple tests run by different people, it was found that XLOOKUP is slower than VLOOKUP.
One reason behind this could be that because XLOOKUP performs a lot more checks and also has more arguments to handle, it weighs down on the speed.
VLOOKUP tends to lose its speed advantage as the data set grows and more columns are added to the table.
Since VLOOKUP would need to process a lot of data when there are multiple columns (compared to XLOOKUP that can have the table array in return array specified separately), this can lower the speed
gap between the two functions with large datasets with multiple columns.
You can read more about the speed comparison of XLOOKUP and VLOOKUP here.
VLOOKUP is Compatible in All Excel Versions, XLOOKUP in New Versions Only
One obvious disadvantage of the new function is that it is not compatible with the older versions of Excel.
XLOOKUP function is only available in Excel with Microsoft 365.
This means that if you’re working with someone who’s using an older version of Excel, you’ll have to stick to using VLOOKUP (or make them upgrade).
I think this is a temporary issue as Microsoft slowly moves all the Excel users to Microsoft 365, where everyone would have access to all the new functions and functionalities.
In this article, I’ve covered how VLOOKUP an XLOOKUP functions are different from each other, and all the improvements that have been made to the XLOOKUP function.
Difference XLOOKUP VLOOKUP
Column Specification Uses lookup_array and return_array, no need for column counting Requires specifying column number from which to return the value
Lookup Column Position The lookup column doesn’t need to be the leftmost column. Only searches in the left-most column of the table array
Default Match Type Defaults to exact match Defaults to approximate match
Search Direction Can search from first to last or last to first Searches only from first to last (top to bottom)
Multiple Column Return Can return values from multiple columns Designed to return one value, requires multiple functions for multiple columns
Handling Missing Values Can specify a value to return if the lookup value is not found ([if_not_found] argument) Requires using IFERROR or IFNA functions to handle missing values
Approximate Match No need for data to be sorted, supports both next smaller and next larger item match modes Data must be sorted in ascending order for approximate match
Wildcard Characters Supports wildcard character match mode explicitly Considers wildcard characters automatically
Lookup Value Size Limit No limit on lookup value size Lookup value can be up to 255 characters long
Speed Slower than VLOOKUP, especially with larger datasets Generally faster, but speed decreases with larger datasets and more columns
Compatibility Available only in newer versions of Excel with Microsoft 365 Compatible with all versions of Excel
While the VLOOKUP function is still widely used by many Excel users, if you have access to the XLOOKUP function, it would be a good idea to learn and start using it (as it offers many advantages over
its predecessor).
I hope you found this article helpful.
If you have any suggestions or comments for me, please let me know in the comments section.
Other Excel articles you may also find useful: | {"url":"https://trumpexcel.com/excel-functions/vlookup-vs-xlookup/","timestamp":"2024-11-12T15:22:11Z","content_type":"text/html","content_length":"431272","record_id":"<urn:uuid:bb4c80b7-e40e-40c1-88ba-a64f52aa1f4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00151.warc.gz"} |
Hearing versus Knowing
From: Aleppo-Syria
Registered: 2018-08-10
Posts: 238
Hearing versus Knowing
Hearing of a set of formulas is not bad and it could be an important first step.
But only knowing when and how these formulas is useful to solve a problem in an easier and/or faster way lets someone be scientific and professional in certain fields.
In fact, the origin of every formula was the need of a simpler and/or faster way to solve a repeated problem in certain applications.
Every living thing has no choice but to execute its pre-programmed instructions embedded in it (known as instincts).
But only a human may have the freedom and ability to oppose his natural robotic nature.
But, by opposing it, such a human becomes no more of this world. | {"url":"https://mathisfunforum.com/viewtopic.php?id=30399","timestamp":"2024-11-11T16:38:22Z","content_type":"application/xhtml+xml","content_length":"7108","record_id":"<urn:uuid:5ff54a3a-5abb-4ecb-95f0-fca8d6432b31>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00683.warc.gz"} |
Isosteric enthalpies for hydrogen adsorbed on nanoporous materials at high pressures
A sound understanding of any sorption system requires an accurate determination of the enthalpy of adsorption. This is a fundamental thermodynamic quantity that can be determined from experimental
sorption data and its correct calculation is extremely important for heat management in adsorptive gas storage applications. It is especially relevant for hydrogen storage, where porous adsorptive
storage is regarded as a competing alternative to more mature storage methods such as liquid hydrogen and compressed gas. Among the most common methods to calculate isosteric enthalpies in the
literature are the virial equation and the Clausius-Clapeyron equation. Both methods have drawbacks, for example, the arbitrary number of terms in the virial equation and the assumption of ideal gas
behaviour in the Clausius-Clapeyron equation. Although some researchers have calculated isosteric enthalpies of adsorption using excess amounts adsorbed, it is arguably more relevant to applications
and may also be more thermodynamically consistent to use absolute amounts adsorbed, since the Gibbs excess is a partition, not a thermodynamic phase. In this paper the isosteric enthalpies of
adsorption are calculated using the virial, Clausius-Clapeyron and Clapeyron equations from hydrogen sorption data for two materials - activated carbon AX-21 and metal-organic framework MIL-101. It
is shown for these two example materials that the Clausius-Clapeyron equation can only be used at low coverage, since hydrogen's behaviour deviates from ideal at high pressures. The use of the virial
equation for isosteric enthalpies is shown to require care, since it is highly dependent on selecting an appropriate number of parameters. A systematic study on the use of different parameters for
the virial was performed and it was shown that, for the AX-21 case, the Clausius-Clapeyron seems to give better approximations to the exact isosteric enthalpies calculated using the Clapeyron
equation than the virial equation with 10 variable parameters.
Dive into the research topics of 'Isosteric enthalpies for hydrogen adsorbed on nanoporous materials at high pressures'. Together they form a unique fingerprint. | {"url":"https://researchportalplus.anu.edu.au/en/publications/isosteric-enthalpies-for-hydrogen-adsorbed-on-nanoporous-material","timestamp":"2024-11-13T06:22:00Z","content_type":"text/html","content_length":"51566","record_id":"<urn:uuid:619300c2-dccc-4f70-8a46-81317667e5ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00726.warc.gz"} |
Resistor Color Code Calculator
Introduction to Resistor Color Codes
Resistor color codes are a standardized system used to denote the value of resistors. These codes consist of colored bands that indicate the resistor’s value, tolerance, and reliability. In this
guide, we’ll cover how to decode resistor color codes for 4-band, 5-band, and 6-band resistors.
4-Band Resistor Color Code
The 4-band resistor is one of the most common types. It consists of four color bands that represent the following:
• Band 1: First significant digit.
• Band 2: Second significant digit.
• Band 3: Multiplier (the power of ten).
• Band 4: Tolerance (percentage of allowed variance).
5-Band Resistor Color Code
The 5-band resistor provides more precision and is commonly used in higher accuracy applications. It consists of:
• Band 1: First significant digit.
• Band 2: Second significant digit.
• Band 3: Third significant digit.
• Band 4: Multiplier.
• Band 5: Tolerance.
6-Band Resistor Color Code
The 6-band resistor is similar to the 5-band resistor but with an added band for temperature coefficient. This type of resistor is used where temperature stability is critical:
• Band 1: First significant digit.
• Band 2: Second significant digit.
• Band 3: Third significant digit.
• Band 4: Multiplier.
• Band 5: Tolerance.
• Band 6: Temperature coefficient (ppm/°C).
Understanding resistor color codes is crucial for working with electronics. By knowing the significance of each band, you can easily determine the resistance value, tolerance, and temperature
coefficient of resistors used in your projects. Whether you’re working with 4-band, 5-band, or 6-band resistors, this guide serves as a reference for quick and accurate calculations. | {"url":"https://turn2engineering.com/calculators/resistor-color-code-calculator","timestamp":"2024-11-07T00:39:13Z","content_type":"text/html","content_length":"215941","record_id":"<urn:uuid:a95e8edc-3d30-48c8-97ff-942d5fa5aa7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00588.warc.gz"} |
Training Monotony
From Fellrnr.com, Running tips
Training Monotony is not about boredom, but is a way of measuring the similarity of daily training. By calculating a simple number, it's easy to evaluate a training program, and understand its
effectiveness. Training Monotony can be calculated using a spreadsheet or using Runalyze.com or TrainAsOne.com. The calculation is based on each day's training stress, dividing the average by the
standard deviation for each rolling seven day period.
1 Training Monotony and Overtraining
It is long been recognized the athletes cannot train hard every day. Modern training plans recommend a few hard days per week, with the other days as easier or rest days. A lack of variety in
training stress, known as Training Monotony, is considered a key factor in causing Overtraining Syndrome^[1]^[2]. There is also evidence^[3] that increased training frequency results in reduced
performance benefits from identical training sessions as well as increased fatigue.
2 Training Monotony and Supercompensation
Training Monotony is related to Supercompensation and the need for adequate rest to recover from training.
3 Quantifying monotony
One approach^[4] to measuring monotony is statistically analyze the variation in workouts. The first stage is to work out a measure of the daily TRIMP (TRaining IMPulse). From this daily TRIMP it's
possible to calculate the standard deviation for each 7 day period. The relationship between the daily average TRIMP value and the standard deviation can provide a metric for monotony. The monotony
value combined with the overall training level can be used to evaluate the likelihood of Overtraining Syndrome.
4 Monotony Calculations
The original work^[4] on training monotony used TRIMP^cr10 and TRIMP^zone, but I substitute TRIMP^exp for TRIMP^zone because of the advantages noted in TRIMP. (Simply using daily mileage or duration
could be used to get an estimate of Training Monotony.) From the daily TRIMP values for a given 7 day period the standard deviation can be calculated. (If there is more than one workout in a day, the
TRIMP values for each are simply added together.) The monotony can be calculated using
Monotony = average(TRIMP)/stddev(TRIMP)
This gives a value of monotony that tends towards infinity as stddev(TRIMP) tends towards zero, so I cap Monotony to a maximum value of 10. Without this cap, the value tends to be unreasonably
sensitive to high levels of monotony. Values of Monotony over 2.0 are generally considered too high, and values below 1.5 are preferable. A high value for Monotony indicates that the training program
is ineffective. This could be because the athlete is doing a low level of training; an extreme example would be a well-trained runner doing a single easy mile every day. This would allow for complete
recovery, but would not provide the stimulus for improvement and would likely lead to rapid detraining. At the other extreme, doing a hard work out every day would be monotonous and not allow
sufficient time to recover. The Training Strain below can help determine the difference between monotonous training that is inadequate and monotonous training that is excessive.
4.1 Updated Monotony Formula
The formula above Is useful, but Its sensitivity to higher levels of monotony can overwhelm Your training data. This is particularly obvious when using the training strain calculations below. A small
modification results in Training monotony values between 0.29 and 1.0. Here is the updated formula:
Monotony = average(TRIMP)/( stddev(TRIMP) + average(TRIMP) )
When the standard deviation tends toward zero, the monotony value now tends towards 1.0 rather than Infinity. The highest standard deviation in a seven day period is from a single training day,
combined with six days of rest. This results in a monotony of about 0.2899. (Note that you can still get a divide by zero error if there is no training load for the entire week, as both average and
standard deviation are both zero. Treating this as a special case and assuming a training monotony of either 0 or 0.2899 is probably reasonable depending on usage.)
5 Training Strain Calculations
A similar calculation can be used to calculate a value for Training Strain.
Training Strain = sum(TRIMP) * Monotony
The value of Training Strain that leads to actual Overtraining Syndrome would be specific to each athlete. An elite level athlete will be able to train up much higher levels than a beginner. However
this Training Strain provides a better metric of the overall stress that an athlete is undergoing than simply looking at training volume.
6 A simple TRIMP^cr10 based calculator
This calculator will show the TRIMP^cr10 values for each day, the Monotony, the total TRIMP^cr10 for the week and the Training Strain.
7 TRIMP^exp Examples
For these examples we will use just a few simple workouts. Let's assume a male athlete with a Maximum Heart Rate of 180 and a Resting Heart Rate of 40, giving a Heart Rate Reserve of 140. Let's
assume our hypothetical athlete does his easy runs at a 9 min/mile pace and heart rate of 130. We'll use only one of the type of workout, a tempo run his easy runs at a 7 min/mile pace and heart rate
of 160. This gives us some TRIMP^exp values for some workouts.
Miles Duration TRIMP^exp
Easy 4 36 51
Easy 6 54 76
Easy 10 90 127
Easy 20 180 254
Tempo 4 28 80
Tempo 8 56 159
Here is a sample week's workout with three harder workouts, a 4 mile tempo, a 10 mile mid-long run and a 20 mile long run with four mile easy runs on the other days, a total of 50 miles.
Monday Tempo 4 80
Tuesday Easy 4 51
Wednesday Easy 10 127
Thursday Easy 4 51
Friday Easy 4 51
Saturday Easy 20 254
Sunday Easy 4 51
Stdev 70
Avg 95
Total 665
Monotony 1.36
Training Strain 903
If we give our athlete a single day's rest on Sunday, we reduce the mileage by 4 miles to 46 miles, total TRIMP^exp goes down by 51, but the Monotony of drops more significantly to 1.15 and the
Training Strain drops by 199. So the mileage has dropped about 9%, but the Training Strain has dropped by 22%.
Monday Tempo 4 80
Tuesday Easy 4 51
Wednesday Easy 10 127
Thursday Easy 4 51
Friday Easy 4 51
Saturday Easy 20 254
Sunday Rest 0
Stdev 77
Avg 88
Total 614
Monotony 1.15
Training Strain 704
A further rest day on Tuesday drops the Training Strain by a further 21%.
Monday Tempo 4 80
Tuesday Rest 0
Wednesday Easy 10 127
Thursday Easy 4 51
Friday Easy 4 51
Saturday Easy 20 254
Sunday Rest 0
Stdev 82
Avg 80
Total 563
Monotony 0.98
Training Strain 553
If we compare this with an extreme example of a monotonous training plan, we have a slightly lower mileage (46 v 50), and a 57% lower total TRIMP^exp (414 v 927), but the monotony is remarkably high
at 4.7 and the training strain is 2.2x higher. In practice, there would be greater day to day variations, even within the same 6 mile easy run, so the results would not be quite so dramatic.
Monday Easy 6 54
Tuesday Easy 6 54
Wednesday Easy 10 90
Thursday Easy 6 54
Friday Easy 6 54
Saturday Easy 6 54
Sunday Easy 6 54
Stdev 13
Avg 59
Total 414
Monotony 4.69
Training Strain 1,944
8 References | {"url":"https://fellrnr.com/mediawiki/index.php?title=Training_Monotony&ref=nickjstevens.com&printable=yes","timestamp":"2024-11-04T01:34:53Z","content_type":"text/html","content_length":"37997","record_id":"<urn:uuid:df534ff3-6328-4c28-8bf0-717f22730beb>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00313.warc.gz"} |
Whether compressive strength of 100mm cube or 150mm cube is more?Why?
Compressive strength of 100mm cube is greater than the compressive strength of 150mm.
The reason is that..
1) If we mould concrete into cube having a small demension like 100 X 100 X 100 mm^3 the bonding is very good than the 150 mm^3 cube. we know that as the size of the cube or beam increasing the void
spaces in it also increases. If there are voids, the concrete element is unabel to take the load.so, small dimension cube will have more compressive strength than the larger one. and
2) we have a relation stress = force/area.
let us concider two cubes of volume 150mm^3 and 100 mm^3 .apply same amount of force ,say 1KN on each cube.then what will be the stresses induced in each cube...
in 150mm^3 44.444 kn/m^2
in 100 mm^3 100 kn/m^2
100 mm^3 is having high stress means high internal resistance,, means resistance to failure. so it is having more compressive strengh than bigger one. | {"url":"http://mail.aboutcivil.org/answers/3320/whether-compressive-strength-100mm-cube-150mm-cube-more-why","timestamp":"2024-11-06T14:47:14Z","content_type":"text/html","content_length":"46605","record_id":"<urn:uuid:b63411d7-7de1-49b5-bf89-d1ae5f723ea1>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00186.warc.gz"} |
Easy question about System of equations solver
04-29-2016, 07:54 AM
(This post was last modified: 04-29-2016 07:57 AM by PedroML.)
Post: #1
PedroML Posts: 5
Junior Member Joined: Apr 2016
Easy question about System of equations solver
Hello, here a newbie using HP Prime, Thank you for your attention.
As I am using solve() for getting the solutions of a simple polynomical system, it provides 2 pairs of solutions. The question is, how can I store each of them wihtout retyping?.
Maybe the attached screenshot will help to understand the question.
Thank you in advance.
Pedro ML
04-29-2016, 08:01 AM
Post: #2
primer Posts: 135
Member Joined: Sep 2015
RE: Easy question about System of equations solver
you can copy the list to a variable,
and then ask for 1st solution :
and for the 2nd :
04-29-2016, 09:32 AM
Post: #3
DrD Posts: 1,136
Senior Member Joined: Feb 2014
RE: Easy question about System of equations solver
This also works:
L1:=solve({2*x^2+3*y^2 = 1, 2*x+3*y = 1},{x,y})
L1(1) ==> [-0.29 0.53]
L1(2) ==> [ 0.69 -0.13]]
04-29-2016, 09:49 AM
Post: #4
PedroML Posts: 5
Junior Member Joined: Apr 2016
RE: Easy question about System of equations solver
I see, thank you!
As L1 is a list and elements are matrix 1x2, then I need:
and I can use M1(1,1), M1(1,2), M2(1,1), M2(1,2)
Is that right?
04-29-2016, 10:04 AM
(This post was last modified: 04-29-2016 10:39 AM by DrD.)
Post: #5
DrD Posts: 1,136
Senior Member Joined: Feb 2014
RE: Easy question about System of equations solver
(04-29-2016 09:49 AM)PedroML Wrote: I see, thank you!
As L1 is a list and elements are matrix 1x2, then I need:
and I can use M1(1,1), M1(1,2), M2(1,1), M2(1,2)
Is that right?
M1 is a vector in this case, with size(M1) == 2. So elements of the vector are:
M1(1) == -0.29
M1(2) == 0.53
M2(1) == 0.69
M2(2) == -0.13
L1:={[-0.29 0.53], [0.69 -0.13]};
L1(1,2) ==> 0.53 (For example, to reach the second element of the first vector, as a one dimensional array, in a list of vectors).
As a vector, there aren't any rows and columns, its just a one dimensional array. M1(row, col) works for matrices. Lists, vectors, and matrices are a subject that could stand additional clarification
in the user references. Lists and vectors have a lot in common. The solve() command is one particular example. For instance, if you try the examples given in the calc Help, some return lists, and
others return vectors. The Help detail says the command "Returns a
of the solutions..."
04-29-2016, 11:21 AM
Post: #6
PedroML Posts: 5
Junior Member Joined: Apr 2016
RE: Easy question about System of equations solver
Thank you both, got it!!!
So quickly all answers. Great.
User(s) browsing this thread: 1 Guest(s) | {"url":"https://hpmuseum.org/forum/thread-6174-post-55166.html","timestamp":"2024-11-12T12:52:30Z","content_type":"application/xhtml+xml","content_length":"30816","record_id":"<urn:uuid:1356fbb8-be03-483d-a7a0-3f0fbc7f6295>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00394.warc.gz"} |
Algorithm to calculate number of intersecting discs
Given an array A of N integers we draw N discs in a 2D plane, such that i-th disc has center in (0,i) and a radius A[i]. We say that k-th disc and j-th disc intersect, if k-th and j-th discs have at
least one common point.
Write a function
int number_of_disc_intersections(int[] A);
which given an array A describing N discs as explained above, returns the number of pairs of intersecting discs. For example, given N=6 and
A[0] = 1
A[1] = 5
A[2] = 2
A[3] = 1
A[4] = 4
A[5] = 0
there are 11 pairs of intersecting discs:
0th and 1st
0th and 2nd
0th and 4th
1st and 2nd
1st and 3rd
1st and 4th
1st and 5th
2nd and 3rd
2nd and 4th
3rd and 4th
4th and 5th
so the function should return 11. The function should return -1 if the number of intersecting pairs exceeds 10,000,000. The function may assume that N does not exceed 10,000,000.
So you want to find the number of intersections of the intervals [i-A[i], i+A[i]].
Maintain a sorted array (call it X) containing the i-A[i] (also have some extra space which has the value i+A[i] in there).
Now walk the array X, starting at the leftmost interval (i.e smallest i-A[i]).
For the current interval, do a binary search to see where the right end point of the interval (i.e. i+A[i]) will go (called the rank). Now you know that it intersects all the elements to the left.
Increment a counter with the rank and subtract current position (assuming one indexed) as we don't want to double count intervals and self intersections.
O(nlogn) time, O(n) space. | {"url":"https://coderapp.vercel.app/answer/4801275","timestamp":"2024-11-05T16:43:38Z","content_type":"text/html","content_length":"95685","record_id":"<urn:uuid:3041602c-8595-451b-97fe-b66ea735be35>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00497.warc.gz"} |
Higher Dimensions
Understanding higher dimensions helps make sense of spirituality, at least for those of us whose minds have a scientific bent.
Ordinary 3-D space is called 3-D because it takes 3 numbers to locate anything in it, no matter what the reference is. There are 3 directions. And 3 numbers to specify size -- length, width, height.
Adding another dimension to 3-D would mean adding another independent direction. We have trouble visualizing this, and scientists do too, so they think that if there's extra dimensions, they're
curled up so small we can't see them. They don't care if it doesn't make sense; the math still works.
If there are higher dimensions, they include the lower dimensions as part of them. You can't have a 4th dimension all by itself, any more than you can have 3 dimensions without it containing 2
dimensions (surfaces) and 1 dimension (lines). If there are 10 dimensions to reality, they contain all the lower dimensions. There's no isolated tenth dimension.
Scientists have good reason to think that there are higher dimensions because the math says so, and they trust the math. Math doesn't lie. The math of general relativity, which expains gravity and is
universally accepted as truth, requires 4 dimensions. If you add another (fifth) dimension, you are able to include the mathematics of electromagnetism, which also has been fully verified. Add still
more dimensions, and you can include the math of the nuclear forces, which are inside the atom.
In the "string" theory of recent physics, "everything is vibration" is literally true. This theory says that the smallest pieces of physical reality are vibrating strings or loops of energy (whatever
that means). Matter is just vibrating energy, and the frequency and mode of a vibrating string determines what kind of particle it appears to be. The math of string theory predicts the size and mass
of particles, and it all agrees closely with what is seen experimentally. But the math only works if there are 10 spatial dimensions (the vibrations are complicated). You have to allow the strings to
oscillate or wobble in 10 directions.
Science wonders why we can't see higher dimensions. I think our brains are designed to interface to 3-D only. And to perceive 4 dimensions we would need three eyes, and the eye would have a
3-dimensional retina. Still, there's the question of why we can't see "cross-sections" of 4-D shapes in our 3-D space.
In any case, I like to think that the realm of spirit is the higher dimensions, probably beyond the tenth, and that spirit can easily access the lower dimensions, including 3-D. But we in 3-D can't
easily access the higher dimensions. To do so we have to raise our vibration (make it faster) to that of the higher dimensions. That's how the spirituality teachers put it. People who channel
entities in spirit can do this. | {"url":"https://creatorgators.com/blog/dimensions.html","timestamp":"2024-11-07T19:05:59Z","content_type":"application/xhtml+xml","content_length":"5514","record_id":"<urn:uuid:0f119908-c736-451f-8695-8fd982aea4e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00021.warc.gz"} |
Astronomical Units to Leagues (statute) Converter
Enter Astronomical Units
Leagues (statute)
β Switch toLeagues (statute) to Astronomical Units Converter
How to use this Astronomical Units to Leagues (statute) Converter π €
Follow these steps to convert given length from the units of Astronomical Units to the units of Leagues (statute).
1. Enter the input Astronomical Units value in the text field.
2. The calculator converts the given Astronomical Units into Leagues (statute) in realtime β using the conversion formula, and displays under the Leagues (statute) label. You do not need to click
any button. If the input changes, Leagues (statute) value is re-calculated, just like that.
3. You may copy the resulting Leagues (statute) value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on button present below the input field.
What is the Formula to convert Astronomical Units to Leagues (statute)?
The formula to convert given length from Astronomical Units to Leagues (statute) is:
Length[(Leagues (statute))] = Length[(Astronomical Units)] / 3.22734676494629e-8
Substitute the given value of length in astronomical units, i.e., Length[(Astronomical Units)] in the above formula and simplify the right-hand side value. The resulting value is the length in
leagues (statute), i.e., Length[(Leagues (statute))].
Calculation will be done after you enter a valid input.
Consider that the average distance from Earth to the Sun is 1 astronomical unit (AU).
Convert this distance from astronomical units to Leagues (statute).
The length in astronomical units is:
Length[(Astronomical Units)] = 1
The formula to convert length from astronomical units to leagues (statute) is:
Length[(Leagues (statute))] = Length[(Astronomical Units)] / 3.22734676494629e-8
Substitute given weight Length[(Astronomical Units)] = 1 in the above formula.
Length[(Leagues (statute))] = 1 / 3.22734676494629e-8
Length[(Leagues (statute))] = 30985204.6536
Final Answer:
Therefore, 1 AU is equal to 30985204.6536 st.league.
The length is 30985204.6536 st.league, in leagues (statute).
Consider that the distance from Earth to Mars at its closest approach is approximately 0.5 astronomical units (AU).
Convert this distance from astronomical units to Leagues (statute).
The length in astronomical units is:
Length[(Astronomical Units)] = 0.5
The formula to convert length from astronomical units to leagues (statute) is:
Length[(Leagues (statute))] = Length[(Astronomical Units)] / 3.22734676494629e-8
Substitute given weight Length[(Astronomical Units)] = 0.5 in the above formula.
Length[(Leagues (statute))] = 0.5 / 3.22734676494629e-8
Length[(Leagues (statute))] = 15492602.3268
Final Answer:
Therefore, 0.5 AU is equal to 15492602.3268 st.league.
The length is 15492602.3268 st.league, in leagues (statute).
Astronomical Units to Leagues (statute) Conversion Table
The following table gives some of the most used conversions from Astronomical Units to Leagues (statute).
Astronomical Units (AU) Leagues (statute) (st.league)
0 AU 0 st.league
1 AU 30985204.6536 st.league
2 AU 61970409.3072 st.league
3 AU 92955613.9608 st.league
4 AU 123940818.6144 st.league
5 AU 154926023.268 st.league
6 AU 185911227.9216 st.league
7 AU 216896432.5752 st.league
8 AU 247881637.2288 st.league
9 AU 278866841.8824 st.league
10 AU 309852046.536 st.league
20 AU 619704093.072 st.league
50 AU 1549260232.6801 st.league
100 AU 3098520465.3602 st.league
1000 AU 30985204653.6016 st.league
10000 AU 309852046536.0164 st.league
100000 AU 3098520465360.164 st.league
Astronomical Units
An astronomical unit (AU) is a unit of length used in astronomy to measure distances within our solar system. One astronomical unit is equivalent to approximately 149,597,870.7 kilometers or about
92,955,807.3 miles.
The astronomical unit is defined as the mean distance between the Earth and the Sun.
Astronomical units are used to express distances between celestial bodies within the solar system, such as the distances between planets and their orbits. They provide a convenient scale for
describing and comparing distances in a way that is more manageable than using kilometers or miles.
Leagues (statute)
A league (statute) is a unit of length used to measure distances. One statute league is equivalent to 3 miles or approximately 4.828 kilometers.
The statute league is defined as three miles, and it was historically used in various English-speaking countries for measuring distances, especially in land navigation and mapping.
Statute leagues are less commonly used today but may still appear in historical documents, literature, and some regional contexts. They provide a way to express distances in a scale larger than miles
but smaller than other large units like leagues nautical.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Astronomical Units to Leagues (statute) in Length?
The formula to convert Astronomical Units to Leagues (statute) in Length is:
Astronomical Units / 3.22734676494629e-8
2. Is this tool free or paid?
This Length conversion tool, which converts Astronomical Units to Leagues (statute), is completely free to use.
3. How do I convert Length from Astronomical Units to Leagues (statute)?
To convert Length from Astronomical Units to Leagues (statute), you can use the following formula:
Astronomical Units / 3.22734676494629e-8
For example, if you have a value in Astronomical Units, you substitute that value in place of Astronomical Units in the above formula, and solve the mathematical expression to get the equivalent
value in Leagues (statute). | {"url":"https://convertonline.org/unit/?convert=astronomical_unit-leagues_statute","timestamp":"2024-11-06T20:11:40Z","content_type":"text/html","content_length":"92708","record_id":"<urn:uuid:c2e1885c-e51b-42ac-9186-97400052e5b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00306.warc.gz"} |
Maths: Invented or Discovered?
Image: wix.com Physics is clearly discovered, while languages are invented. The distinction here is clear: one observes nature and the other is a construct of humans. However, things become less
evident when we get to Maths - is it observations of relationships in nature or is it methods made by humans to tackle our problems? This long standing debate hasn’t been resolved in centuries and I
plan on shedding some light to it, and perhaps help you reach a conclusion.
I am, myself, a conventionalist which is a middle-ground of some sorts. Conventionalists argue that mathematics is a human-constructed language used to describe discovered truths. That may sound
intimidating, but I will break it down so you can fully understand the meaning behind that sentence.
We must first start with a simple truth: the terms and methods we use in mathematics are conventions (agreements). For example, the coordinates system - which helps describe relationships between
different points - is a man-made tool. The same goes for our number system; “1”, “2” and “3” are terms created to help us describe what we see.
A similar, second point we must recognise is that truths are context-dependent (strange, right?). In geometry, we could say that all angles in a triangle add up to 180°, and that this is a universal,
eternal, and unchangeable truth - but this is not the case. In non-euclidean geometry (fancy word for geometry in curved surfaces), they can add up to 270°! Therefore, we reach the conclusion that
our conventions must be context-dependent and may vary over time. However, not everything is man-made. These truths we find in nature, and that we observe with mathematics are, as the word implies,
discovered. The Pythagorean formula, a2+b2=c2, is true for all of the right angle triangles (even in curved surfaces!) we find in nature. No human created that to help us tackle a problem, but the
terms “a”, “b”, and “c” were, and help us describe this relationship we find in nature. Hopefully, it now seems evident.
I would like to conclude that we have proven that maths are both created and discovered, and that they are formed by human conventions to describe patterns in nature. Sadly, this is not entirely the
case. We have indeed found convincing evidence for the statement, but it is not that simple: conventionalists often struggle with the fact that, if mathematics are merely conventions, they would lack
a deeper connection to the objective reality of the real world. | {"url":"https://www.runnymede-times.com/post/maths-invented-or-discovered","timestamp":"2024-11-14T17:39:23Z","content_type":"text/html","content_length":"933999","record_id":"<urn:uuid:e9112b0f-c1ed-4ad4-9a19-29e718cfe741>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00360.warc.gz"} |
Free Printable Multiplication Worksheets Pdf Grade 3
Math, specifically multiplication, forms the keystone of numerous scholastic disciplines and real-world applications. Yet, for several students, understanding multiplication can present a difficulty.
To address this obstacle, teachers and moms and dads have actually welcomed a powerful tool: Free Printable Multiplication Worksheets Pdf Grade 3.
Introduction to Free Printable Multiplication Worksheets Pdf Grade 3
Free Printable Multiplication Worksheets Pdf Grade 3
Free Printable Multiplication Worksheets Pdf Grade 3 -
Grade 3 math worksheets on multiplication tables of 2 and 3 Free pdf worksheets from K5 Learning s online reading and math program Our members helped us give away millions of worksheets last year We
provide free educational materials to parents and teachers in over 100 countries If you can please consider purchasing a membership 24
Make an unlimited supply of worksheets for grade 3 multiplication topics including skip counting multiplication tables and missing factors The worksheets can be made in html or PDF format both are
easy to print Below you will find the various worksheet types both in html and PDF format They are randomly generated so unique each time
Value of Multiplication Technique Comprehending multiplication is pivotal, laying a solid foundation for sophisticated mathematical concepts. Free Printable Multiplication Worksheets Pdf Grade 3
provide structured and targeted method, promoting a deeper understanding of this basic math operation.
Advancement of Free Printable Multiplication Worksheets Pdf Grade 3
Multiplication Worksheets 2 Digit By 1 Digit Math Drills DIY Projects Patterns Monograms
Multiplication Worksheets 2 Digit By 1 Digit Math Drills DIY Projects Patterns Monograms
These free printable 3rd Grade multiplication worksheets for parents and teachers are also essential for your child s math skills because they are simple fun and implement a constant practice of math
skills Apart from being one of the basic math skills multiplication is also a strong foundation for more advanced math topics such as division
Grade 5 multiplication worksheets Multiply by 10 100 or 1 000 with missing factors Multiplying in parts distributive property Multiply 1 digit by 3 digit numbers mentally Multiply in columns up to
2x4 digits and 3x3 digits Mixed 4 operations word problems
From traditional pen-and-paper exercises to digitized interactive formats, Free Printable Multiplication Worksheets Pdf Grade 3 have progressed, satisfying varied discovering designs and choices.
Sorts Of Free Printable Multiplication Worksheets Pdf Grade 3
Fundamental Multiplication Sheets Basic exercises concentrating on multiplication tables, helping learners develop a solid arithmetic base.
Word Problem Worksheets
Real-life circumstances incorporated into troubles, boosting vital reasoning and application abilities.
Timed Multiplication Drills Tests designed to enhance rate and precision, aiding in rapid psychological mathematics.
Benefits of Using Free Printable Multiplication Worksheets Pdf Grade 3
Multiplication worksheets For Grade 2 3 20 Sheets pdf Year 2 3 4 Grade 2 3 4 Numeracy
Multiplication worksheets For Grade 2 3 20 Sheets pdf Year 2 3 4 Grade 2 3 4 Numeracy
These worksheets contain simple multiplication word problems Students derive a multiplication equation from the word problem solve the equation by mental multiplication and express the answer in
appropriate units Students should understand the meaning of multiplication before attempting these worksheets Worksheet 1 Worksheet 2 Worksheet
1 Minute Multiplication Interactive Worksheet More Mixed Minute Math Interactive Worksheet Budgeting for a Holiday Meal Worksheet 2 Digit Multiplication Interactive Worksheet Christmas Multiplication
Boosted Mathematical Abilities
Consistent technique sharpens multiplication proficiency, improving general mathematics abilities.
Enhanced Problem-Solving Talents
Word issues in worksheets create logical reasoning and method application.
Self-Paced Discovering Advantages
Worksheets accommodate private knowing rates, promoting a comfortable and adaptable learning environment.
Just How to Develop Engaging Free Printable Multiplication Worksheets Pdf Grade 3
Incorporating Visuals and Shades Vivid visuals and colors catch attention, making worksheets visually appealing and engaging.
Consisting Of Real-Life Scenarios
Connecting multiplication to daily situations adds significance and usefulness to workouts.
Tailoring Worksheets to Various Skill Degrees Personalizing worksheets based on differing efficiency levels ensures inclusive discovering. Interactive and Online Multiplication Resources Digital
Multiplication Tools and Games Technology-based resources provide interactive learning experiences, making multiplication interesting and satisfying. Interactive Web Sites and Applications Online
platforms provide varied and available multiplication method, supplementing traditional worksheets. Personalizing Worksheets for Numerous Understanding Styles Aesthetic Students Aesthetic aids and
layouts aid comprehension for learners inclined toward visual learning. Auditory Learners Verbal multiplication problems or mnemonics deal with students that understand concepts via acoustic ways.
Kinesthetic Students Hands-on activities and manipulatives sustain kinesthetic learners in understanding multiplication. Tips for Effective Implementation in Knowing Consistency in Practice Normal
method strengthens multiplication skills, promoting retention and fluency. Stabilizing Rep and Selection A mix of repeated workouts and varied issue layouts maintains interest and comprehension.
Providing Constructive Comments Feedback help in identifying locations of enhancement, motivating ongoing development. Challenges in Multiplication Practice and Solutions Motivation and Interaction
Difficulties Monotonous drills can lead to uninterest; innovative approaches can reignite motivation. Getting Over Worry of Math Unfavorable assumptions around mathematics can prevent progress;
producing a favorable knowing environment is essential. Influence of Free Printable Multiplication Worksheets Pdf Grade 3 on Academic Efficiency Studies and Research Study Searchings For Research
indicates a positive connection between regular worksheet use and enhanced math performance.
Final thought
Free Printable Multiplication Worksheets Pdf Grade 3 become versatile devices, fostering mathematical effectiveness in students while accommodating varied knowing styles. From fundamental drills to
interactive on the internet resources, these worksheets not just improve multiplication abilities however additionally promote vital thinking and problem-solving abilities.
Printable Multiplication Chart Free PrintableMultiplication
Copy Of Multiplication Table Multiplication Table Multiplication Table printable
Check more of Free Printable Multiplication Worksheets Pdf Grade 3 below
Free Multiplication Worksheets You Can Download Today Grades 3 5 Multiplication worksheets
Multiplication Worksheets For Grade 3 PDF The Multiplication Table
Multiplication Practice Sheets Printable Worksheets Multiplication Worksheets Pdf Grade 234
3x2 Multiplication Worksheets Times Tables Worksheets
Free Multiplication Worksheet 2 Digit And 3 Digit By 1 Digit Free4Classrooms
Printable Multiplication Worksheets X3 PrintableMultiplication
Multiplication worksheets for grade 3 Homeschool Math
Make an unlimited supply of worksheets for grade 3 multiplication topics including skip counting multiplication tables and missing factors The worksheets can be made in html or PDF format both are
easy to print Below you will find the various worksheet types both in html and PDF format They are randomly generated so unique each time
span class result type
3 OA A 3 Use multiplication and division within 100 to solve word problems in situations involving equal groups arrays and measurement quantities 3 OA A 4 Determine the unknown whole number in a
multiplication or division
Make an unlimited supply of worksheets for grade 3 multiplication topics including skip counting multiplication tables and missing factors The worksheets can be made in html or PDF format both are
easy to print Below you will find the various worksheet types both in html and PDF format They are randomly generated so unique each time
3 OA A 3 Use multiplication and division within 100 to solve word problems in situations involving equal groups arrays and measurement quantities 3 OA A 4 Determine the unknown whole number in a
multiplication or division
3x2 Multiplication Worksheets Times Tables Worksheets
Multiplication Worksheets For Grade 3 PDF The Multiplication Table
Free Multiplication Worksheet 2 Digit And 3 Digit By 1 Digit Free4Classrooms
Printable Multiplication Worksheets X3 PrintableMultiplication
Multiplication worksheets PDF To Print
Multiplication Sheets 4th Grade
Multiplication Sheets 4th Grade
Printable Multiplication Table Quiz PrintableMultiplication
Frequently Asked Questions (Frequently Asked Questions).
Are Free Printable Multiplication Worksheets Pdf Grade 3 ideal for all age teams?
Yes, worksheets can be customized to different age and skill degrees, making them adaptable for numerous students.
How typically should pupils exercise using Free Printable Multiplication Worksheets Pdf Grade 3?
Consistent practice is crucial. Normal sessions, ideally a few times a week, can generate significant improvement.
Can worksheets alone enhance mathematics abilities?
Worksheets are an important tool however ought to be supplemented with different discovering approaches for comprehensive ability development.
Are there on-line platforms supplying totally free Free Printable Multiplication Worksheets Pdf Grade 3?
Yes, many academic internet sites supply free access to a vast array of Free Printable Multiplication Worksheets Pdf Grade 3.
How can parents support their youngsters's multiplication practice in the house?
Motivating constant practice, giving aid, and developing a favorable discovering environment are advantageous steps. | {"url":"https://crown-darts.com/en/free-printable-multiplication-worksheets-pdf-grade-3.html","timestamp":"2024-11-12T06:04:00Z","content_type":"text/html","content_length":"29675","record_id":"<urn:uuid:02d7381d-7214-4d36-bdeb-78237c7336ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00287.warc.gz"} |
mp_arc 07-238
07-238 David Ruelle
Structure and f-dependence of the a.c.i.m. for a unimodal map f of Misiurewicz type. (385K, pdf) Oct 10, 07
Abstract , Paper (src), View paper (auto. generated pdf), Index of related papers
Abstract. By using a suitable Banach space on which we let the transfer operator act, we make a detailed study of the ergodic theory of a unimodal map $f$ of the interval in the Misiurewicz case.
We show in particular that the absolutely continuous invariant measure $\rho$ can be written as the sum of 1/square root spikes along the critical orbit, plus a continuous background. We conclude
by a discussion of the sense in which the map $f\mapsto\rho$ may be differentiable.
Files: 07-238.src( 07-238.keywords , structure.pdf.mm ) | {"url":"http://kleine.mat.uniroma3.it/mp_arc-bin/mpa?yn=07-238","timestamp":"2024-11-12T09:20:18Z","content_type":"text/html","content_length":"1738","record_id":"<urn:uuid:d76743bb-9b36-4a14-a80e-a50a27e8d3b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00452.warc.gz"} |
Lesson 7
Using Graphs to Find Average Rate of Change
Lesson Narrative
Previously, students have characterized how functions are changing qualitatively, by describing them as increasing, staying constant, or decreasing in value. In earlier units and prior to this
course, students have also computed and compared the slopes of line graphs and interpreted them in terms of rates of change. In this lesson, students learn to characterize changes in functions
quantitatively, by using average rates of change.
Students learn that average rate of change can be used to measure how fast a function changes over a given interval. This can be done when we know the input-output pairs that mark the interval of
interest, or by estimating them from a graph.
Attention to units is important in computing or estimating average rates of change, because units give meaning to how much the output quantity changes relative to the input. In thinking carefully
about appropriate units to use, students practice attending to precision (MP6).
Students also engage in aspects of mathematical modeling (MP4) when they use a data set or a graph to compute average rates of change and then use it to analyze a situation or make predictions.
Learning Goals
Teacher Facing
• Given a graph of a function, estimate or calculate the average rate of change over a specified interval.
• Recognize that the slope of a line joining two points on a graph of a function is the average rate of change.
• Understand that the average rate of change describes how fast the output of a function changes relative to the input over the interval.
Student Facing
• Let’s measure how quickly the output of a function changes.
Student Facing
• I understand the meaning of the term “average rate of change.”
• When given a graph of a function, I can estimate or calculate the average rate of change between two points.
CCSS Standards
Building Towards
Glossary Entries
• average rate of change
The average rate of change of a function \(f\) between inputs \(a\) and \(b\) is the change in the outputs divided by the change in the inputs: \(\frac{f(b)-f(a)}{b-a}\). It is the slope of the
line joining \((a,f(a))\) and \((b, f(b))\) on the graph.
Additional Resources
Google Slides For access, consult one of our IM Certified Partners.
PowerPoint Slides For access, consult one of our IM Certified Partners. | {"url":"https://curriculum.illustrativemathematics.org/HS/teachers/1/4/7/preparation.html","timestamp":"2024-11-04T10:59:56Z","content_type":"text/html","content_length":"80432","record_id":"<urn:uuid:bd09e438-ae08-49db-8db1-e2154000f810>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00795.warc.gz"} |
Mathematics 6 (MYP 1) Assessment Tasks
About the Book
This comprehensive collection of assessments for MYP Year 1 provides students with 57 engaging, challenging, and diverse investigative tasks to further develop their mathematical skills and
Created using the MYP Mathematics Framework, Assessment Criteria, Key Concepts, Approaches to Learning, and Global Contexts, these tasks can also be used effectively by schools not enrolled in the IB
Middle Years Programme, as they address all of the common fundamental topics of mathematics at this grade level.
To explore an example from the Assessment Tasks, please click here.
Key features of the Mathematics 6 MYP1 Assessment Tasks publication include:
• Written and developed by experienced MYP mathematics educators,
• Assessment Tasks to address all four MYP Assessment Criteria,
• Task-specific Assessment Criteria descriptors,
• Challenging and diverse assessment and investigative tasks that can be easily adapted to enrich any mathematics curriculum
• Multiple levels of difficulty and length allowing for differentiated assessment
• Interesting and engaging authentic assessment tasks to motivate and encourage students
The Mathematics 6 (MYP 1) Assessment Tasks will encourage students to:
• Check their understanding of mathematical skills and concepts through the MYP Assessment Criteria,
• Discover new but related mathematical concepts on their own, while working through both familiar and unfamiliar mathematics,
• Develop and practice critical thinking and problem-solving skills necessary for MYP criterion-based summative assessments
Designed as a companion digital resource to the Haese Mathematics Mathematics 6 MYP 1 third edition textbook, students and teachers familiar with the Haese MYP textbooks will find these assessments a
seamless and highly rewarding addition to their current learning tools. Furthermore, students who use other resources as their main learning tool will also benefit from these tasks, whether they
complete them as formative or summative assessments, or individual or group activities.
To preserve the integrity of the Assessment Tasks as a true evaluation tool, answers will only be allocated to users with Teacher access via Snowflake. (If you are using this title under other
circumstances and require access to the answers, please contact our team at info@haesemathematics.com)
This product has been developed independently from and is not endorsed by the International Baccalaureate Organization. International Baccalaureate, Baccalaureát International, Bachillerato
Internacional, and IB are registered trademarks owned by the International Baccalaureate Organization.
Year Published: 2023
Page Count: 406
Online ISBN: 978-1-922416-63-6 (9781922416636) | {"url":"https://www.haesemathematics.com/books/mathematics-6-myp-1-assessment-tasks","timestamp":"2024-11-13T12:16:59Z","content_type":"text/html","content_length":"109581","record_id":"<urn:uuid:7ead9d02-46d7-47bf-ad86-355c620146f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00068.warc.gz"} |
If the sum of measures of two angles is ${{90}^{\
Hint: To find what are the angles whose sum is ${{90}^{\circ }}$ we will use the definition of types of angles. Firstly we know that ${{90}^{\circ }}$ is a right angle so when two angles are formed
in a ${{90}^{\circ }}$ angle we have to find what are they known as.
Complete step by step answer:
It is given that the sum of two angles is ${{90}^{\circ }}$.
So when the sum of two angles is ${{90}^{\circ }}$ they are known as complementary angles.
As complementary angles are those that when added up give the angle as ${{90}^{\circ }}$ for example two angles measuring 45 degrees each or two angles measuring 30 and 60 degrees each.
Hence if the sum of measures of two angles is ${{90}^{\circ }}$ then the angles are complementary angles.
Note: An angle is a figure formed when two rays meet at a common endpoint. They are represented by the sign $\angle $ and they are measured in degree using a protractor. The two rays joining to form
an angle are known as Arms and the common end point is known as the vertex of the angle. There are many types of angles such as acute angle, Right angle, obtuse angle, Straight angle, Reflex angle
and complete angle. Complementary angle and supplementary angles are also a type of angles but in those we have a pair of angles. Complementary angles are pairs of angles whose sum is ${{90}^{\circ
}}$ and supplementary angles are pairs of angles whose sum is ${{180}^{\circ }}$. Complementary angles always have positive measure and it is composed of two acute angles. | {"url":"https://www.vedantu.com/question-answer/if-the-sum-of-measures-of-two-angles-is-90circ-class-9-maths-cbse-60a26a265b6aeb17f7868c97","timestamp":"2024-11-14T01:44:00Z","content_type":"text/html","content_length":"152677","record_id":"<urn:uuid:6ecf6cad-ad03-432f-a2f8-60a8256e8f42>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00106.warc.gz"} |
A Direct Sum Result for the Information Complexity of Learning
How many bits of information are required to PAC learn a class of hypotheses of VC dimension $d$? The mathematical setting we follow is that of Bassily et al., where the value of interest is the
mutual information $I(S;A(S))$ between the input sample $S$ and the hypothesis outputted by the learning algorithm $A$. We introduce a class of functions of VC dimension $d$ over the domain $X$ with
information complexity at least $Omega dlog log |X|d$ bits for any consistent and proper algorithm (deterministic or random). Bassily et al. proved a similar (but quantitatively weaker) result for
the case $d=1$. The above result is in fact a special case of a more general phenomenon we explore. We define the notion of em information complexity of a given class of functions $. Intuitively, it
is the minimum amount of information that an algorithm for $X$ must retain about its input to ensure consistency and properness. We prove a direct sum result for information complexity in this
context; roughly speaking, the information complexity sums when combining several classes.
Original language Undefined/Unknown
Title of host publication Proceedings of the 31st Conference On Learning Theory
Editors Sébastien Bubeck, Vianney Perchet, Philippe Rigollet
Pages 1547-1568
Number of pages 22
Volume 75
State Published - 1 Jun 2018
Publication series
Name Proceedings of Machine Learning Research
Publisher PMLR | {"url":"https://cris.iucc.ac.il/en/publications/a-direct-sum-result-for-the-information-complexity-of-learning","timestamp":"2024-11-03T23:04:01Z","content_type":"text/html","content_length":"37954","record_id":"<urn:uuid:809cdec7-0015-4ee6-9972-e6a598269a7d>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00331.warc.gz"} |
Extended Formulations and Matroid Polytopes
For this post, I will give a brief introduction to the field of extended formulations. This is a subject that is very much in vogue at the moment, with some outstanding recent breakthroughs that I
will touch upon. There are also a lot of nice matroidal results in this area, a few of which I will mention here, together with some conjectures.
The TSP polytope, TSP$(n) \in \mathbb{R}^{\binom{n}{2}}$ is the convex hull of the set of Hamiltonian cycles of the complete graph $K_n$. The number of facets of TSP$(n)$ grows extremely quickly
with $n$. For example, TSP$(10)$ has more than $50$ billion facets. The central question of extended formulations is: what if we are looking at this problem in the wrong dimension? That is, could
there be a simple polytope in a higher dimensional space that projects down to the TSP polytope? This motivates the notion of extension complexity. Given a polytope $P$, the extension complexity of
$P$, xc$(P)$ is
$$\min \{ \text{number of facets of $Q$: $Q$ projects to $P$}\}.$$
It may seem strange that increasing the dimension (adding more variables) can decrease the number of facets, but the following picture (thanks to Samuel Fiorini for permission to use it) shows that
this is indeed possible.
In the 1980s, Swart [1] attempted to prove that there is a polytope with only polynomial many facets that projects down to the TSP polytope. Note that a polynomial size extended formulation for the
TSP polytope would imply P=NP. The purported linear programs were extremely complicated to analyze. However, in a breakthrough paper, Yannakakis [2] refuted all such attempts by showing that every
symmetric linear program (LP) for the TSP polytope has exponential size. Here symmetric means that each permutation of the cities can be extended to a permutation of the variables without changing
the LP. Since all the proposed LPs of Swart were symmetric, that was the end of the story (for then).
A lingering question however, was whether the symmetry condition was necessary. Yannakakis himself felt that asymmetry should not really help much. However, in 2010, Kaibel, Pashkovich, and Theis
[3] gave examples of polytopes that do not have polynomial size symmetric extended formulations, but that do have polynomial size asymmetric extended formulations. This rekindled interest in the
lingering question. In another breakthrough paper in 2012, Fiorini, Masser, Pokutta, Tiwary, and de Wolf [4] finally proved that the TSP polytope does not admit any polynomial size extended
formulation, symmetric or not. Their proof uses the Factorization Theorem of Yannakakis. That is, the extension complexity of a polytope is actually just the non-negative rank of an associated
matrix (called the slack matrix). To show that this non-negative rank is large for TSP$(n)$, they use a combinatorial lemma due to Razborov [5].
Of course, we can ask about the extension complexity of many other polytopes (including matroid polytopes) that arise in combinatorial optimization. Another famous example is the perfect matching
polytope, PM$(n)$, which is the convex hull of all perfect matchings of the complete graph $K_n$. Note that we can optimize any linear objective function over the perfect matching polytope in
strongly polynomial time via Edmond’s maximum matching algorithm. Thus, it was quite surprising when Rothvoß [6] showed that PM$(n)$ does not admit any polynomial size extended formulation!
Given a matroid $M$, the independence polytope of $M$ is the convex hull of all independent sets of $M$. Rothvoß [7] also proved that there exists a family of matroids such that their corresponding
independence polytopes have extension complexity exponential in their dimension. The fascinating thing about his proof, is that it is purely existential. That is, since it uses a counting argument
for the number of matroids on $n$ elements due to Knuth [8], no explicit family of matroids is known. Indeed, a nice recent observation of Mika Göös is that such an explicit family would imply
the existence of explicit non-monotone circuit depth lower bounds (which no one knows how to do at the moment).
For positive results, Kaibel, Lee, Walter, and Weltge [9] showed that the independence polytopes of regular matroids have polynomial size extended formulations. Their result uses Seymour’s
Decomposition Theorem for regular matroids.
Another class of matroids with small extension complexity are sparsity matroids (which I will now define). Let $G$ be a graph and let $k$ and $\ell$ be integers with $0 \leq \ell \leq 2k-1$. We say
that $G$ is $(k, \ell)$-sparse if for all subsets of edges $F$, $|F| \leq \max \{k|V(F)|-\ell, 0\}$, where $V(F)$ is the set of vertices covered by $F$. $G$ is $(k, \ell)$-tight if it is $(k, \ell)$
–sparse and $|E(G)|=\max \{k|V(G)|-\ell, 0\}$. Consider the subsets of edges $F$ of $G$ such that the graph $(V(G), F)$ is $(k, \ell)$–tight. It turns out that such a collection of subsets is a
matroid. Matroids arising in this way are called $(k, \ell)$-sparsity matroids. Note that graphic matroids are $(1,1)$-sparsity matroids. Iwata, Kamiyama, Katoh, Kijima, and Okamoto [10] recently
proved that sparsity matroids have polynomial size extended formulations.
Both these examples are in some sense close to graphic matroids. It may be that being ‘close’ to graphic is a sufficient condition for having compact extended formulations. Indeed, as far as I know
the following conjectures are all open.
Conjecture 1. The independence polytopes of signed graphic and even cycle matroids both have polynomial size extended formulations.
Conjecture 2. The independence polytopes of a proper minor-closed class of binary matroids have polynomial size extended formulations.
Conjecture 3. The independence polytopes of binary matroids have polynomial size extended formulations.
Conjecture 4. For each finite field $\mathbb{F}$, the independence polytopes of $\mathbb{F}$-representable matroids have polynomial size extended formulations.
[1] E. R. Swart. P = NP. Technical report, University of Guelph, 1986; revision 1987.
[2] M. Yannakakis. Expressing combinatorial optimization problems by linear programs (extended abstract). In Proc. STOC 1988, pages 223–228, 1988.
[3] V. Kaibel, K. Pashkovich, and D.O. Theis. Symmetry matters for the sizes of extended formulations. In Proc. IPCO 2010, pages 135–148, 2010.
[4] S. Fiorini, S. Massar, S. Pokutta, H. Tiwary, and R. de Wolf. Linear vs. semidefinite extended formulations: exponential separation and strong lower bounds. In STOC, pages 95–106, 2012.
[5] A. A. Razborov. On the distributional complexity of disjointness. Theoret. Comput. Sci., 106(2):385– 390, 1992.
[6] T.Rothvoß. The matching polytope has exponential extension complexity, In STOC, New York, NY, USA, 2014, ACM, pp. 263–272.
[7] T.Rothvoß. Some 0/1 polytopes need exponential size extended formulations, Math. Program. Ser. A, (2012), pp. 1–14.
[8] D. E. Knuth. The asymptotic number of geometries, J. Combinatorial Theory Ser. A 16 (1974), 398–400.
[9] Kaibel, V., Lee, J., Walter, M., & Weltge, S. (2015). Extended Formulations for Independence Polytopes of Regular Matroids. arXiv preprint arXiv:1504.03872.
[10] Iwata, S., Kamiyama, N., Katoh, N., Kijima, S., & Okamoto, Y. (2015). Extended formulations for sparsity matroids. Mathematical Programming, 1-10.
16 thoughts on “Extended Formulations and Matroid Polytopes”
1. Hi Tony,
what a great post. This extension rank could of great interest, I think.
1) I remember the argument from [7], that independence polytopes cannot all have small extension rank, like this: there are many matroids => there are many extended formulations of matroid
independence polytopes => these extended formulations cannot all have small rank. So that is not really existential, but starts with an explicit construction of many (sparse paving) matroids.
What it has in common with [8] is the making of a `compressed description’ of matroids, but where we use this to prove that there are few matroids, he uses his compression in reverse to show that
the `size’ of his description is not uniformly small.
2) er(M):=`extension rank of the independence polytope of M’ looks like it could be a very useful complexity measure for matroids. It certainly outperforms the `cover complexity’ of Jorn and me
on graphic matroids. I wonder if it shares these nice properties with cover complexity:
a) er(M\e) \le er(M)
b) er(M/e) \le er(M)
c) er(M) \le er(M/e)+er(M\e)
It would be great if you could put bounds on er(M) in terms of max{ er(N): N minor of M, rank(N)=s}.
3) there is a very interesting general relation between extension rank and communication protocols, which boils down to this for matroids:
The following are equivalent:
a) er(M)\le k
b) there is a randomized communication protocol for M that involves the exchange of at most log(k) bits between two parties, X and Y, so that if X is given a basis B of M and Y a flat F of M,
then the protocol will produce an estimate for |F\cap B| whose average value is the actual value (X and Y may use random bits in the execution of the protocol).
I can’t find the reference for this right now, but I’ll be back.
□ Hi Rudi. Thanks for your comments! Regarding 3b, the reference is Extended formulations, nonnegative factorizations, and randomized communication protocols by Y. Faenza, S. Fiorini, R.
Grappe, and H. R. Tiwary. The general method of ‘computing the slack matrix in expectation’ works to obtain upper bounds on the extension complexity of any polytope. For matroid independence
polytopes, if you write down the slack matrix, the randomized communication protocol does precisely what you said.
☆ Yes, that was the reference.
Now that I see that paper again, I have to correct my statement 3b. The protocol must not compute |B\cap F|, but rather the slack s(F,B):=rank(F)-|F\cap B|, and it is a restriction that
the protocol may only output a nonnegative estimate of s(F,B). This makes a difference!
The paper contains a description of a communication protocol for graphic matroids. Recommended reading.
So to prove your Conjecture 4, it would suffice to show that for a GF(q)-representable matroid on n elements, the exchange of at most O(log(n)) bits suffices to compute the slack s(F,B)
in expectation.
○ So the answer to 2a and 2b are yes (see the other conversation thread). I think the answer to 2c is also yes via Balas’ union of polyhedra trick. That is, the extension complexity of
the union two polytopes P and Q is at most xc(P)+xc(Q). In the case of matroid polytopes, the matroid polytope of M is the union of the matroid polytopes of M / e and M \ e (as can be
seen by conditioning if e is in your independent set or not).
■ Excellent!
Does that `union of polytopes’ trick also prove the following:
“Suppose M is a matroid of rank r and X_1, .. , X_k are sets of size t so that each subset of E(M) of cardinality r contains exactly one of these sets. Then xc(M)\le sum_i xc(M/
For any fixed r there are such X_i provided |E(M)| is large enough, so that would show for a matroid M on n elements of rank r and a t\le r that:
xc(M)/(n choose r) \le
max{xc(N): N minor of M, r(N)=r-t} / (n-t choose r-t)
I conjecture that this even holds for small n.
2. Interesting question. What is the source of belief in the conjectures?
Don’t conjectures 1 and 4 contradict each other? At least for signed graphic matroids?
□ Thanks for your comment. I don’t see why Conjectures 1 and 4 contradict each other. Signed graphic matroids are ternary, but Conjectures 1 and 4 assert that both classes have compact extended
formulations. The conjectures are based on the positive results thus far for ‘graphic like’ matroids. The classes get ‘less graphic’ as you progress from Conjectures 1 to 4, so it may very
well be that Conjectures 3 and 4 are false (or they all may be false).
☆ Sorry, somehow I read a negation into Conjecture 1. It makes a lot more sense now.
Next question… is it immediate whether having a polynomial-sized extended formulation is closed under taking minors?
○ Yes, I think this is clear. If N is a minor of M, then the independence polytope of N is a coordinate projection of a face of the independence polytope of M. Therefore, the extension
complexity of N is at most the extension complexity of M.
■ Does the extension complexity go down by exactly 1 if you relax a circuit-hyperplane?
Edit: Hmm, that can’t be true. A graph may have exponentially many circuit-hyperplanes. Second try:
Does the extension complexity go down, or stay the same, if you relax a circuit-hyperplane?
○ To me it seems that complexity is hardly closed under minors. If i have a set of matroids (or graphs) and an algorithm that solves some problem on that set. Now take a a minor M of a
member M’ in that class. How to to apply a the algorithm If M’ is too big.
Example: stable set problem on graphs with n notes and a clique of size n-log n is polynomilally solvable as there are only polynomially many stable sets. But if you close the class
under induced subgraphs you get all graphs. You know for any input graph it is in the class, so membership of the input is not the issue, the issue is that the algorithm needs an
input that is too big. For matroids and extended formulations similar examples exist.
The sense in complexity issues for minor closed classes doesn not lie in the minor-closedness, but on some tangible structure that comes with the class.
■ Where I wrote:
“The sense in complexity issues for minor closed classes does not lie in the minor-closedness, but on some tangible structure that comes with the class”,
it would have been more to the point to write:
The sense in complexity issues for proper minor closed classes doesn not lie in the minor closedness but in the properness.
■ Hi Bert,
Extension complexity of M is not a measure of how fast decision problems concerning M can be solved, but how much bits it takes to describe M.
If you allow the most generally capable machine to reconstruct the matroid M from your description, then this smallest length of the description is just the Kolmogorov complexity
of M.
Here the description has to take the form of an extended formulation of the independence polytope. So that is a very restricted machine, but it is still surprisingly good at
picking up the graphic structure of a matroid.
You expect from a good complexity measure that it is roughly minor-monotone. A Kolmogorov string describing a minor N of M could always say: here’s a description of M and here is
how you get me as a minor. The latter is going to be few bits compared to the bits it takes to describe a matroid M.
In the same way, you would also expect that the sum of the complexities of M/e and M\e is an upper bound for the complexity of M. Perhaps Tony can show that too for the extension
■ Hi Bert! Thanks for the comment. Yes, you are absolutely correct. I realize that I was not answering the question that was asked. While it is true that extension complexity is
minor monotone, the number of elements also decreases as you pass to minors so the new class that you get from closing your class under minors might not have polynomial size
extension complexity even if your original class does. For example, I think binary projective geometries have small extension complexity (at most quasi-polynomial). However, if
you close this class under minors, you get all binary matroids. I can imagine that this class might have large extension complexity (even though I conjectured otherwise above).
3. Rudi,
I know extension complexity is not about how quick the problem can be solved. My point was that complexity of any kind is generally not closed under minors.
I gave an example concerning computational complexity of stable sets and the induced subgraph order as I found that eassier to explain.
For matroids on n elements with a component isomorphic to a n-log n point line, even the matroid polytope itself has polynomially many
If you close that class under minors you get all matroids. As long as you do not calculate the input size for ‘that class’ after adding the huge line, your lost
□ I realized that I misread your words the minute I read Tony’s reply. Sorry about that 🙂 You made a valid point!
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"http://matroidunion.org/?p=1603","timestamp":"2024-11-13T19:12:35Z","content_type":"text/html","content_length":"56407","record_id":"<urn:uuid:28e8a5e3-212a-4ed9-b531-6b1a1b5f339b>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00431.warc.gz"} |
Equations Three Ways
I've never really been satisfied with how I teach students to solve equations. No matter what, it ends up being one big algorithm and kids have no idea why one side of the equation is equal to the
other. Here's what I'm doing to try to fix that.
Modeling Strength:
It's not math. It's a puzzle.
Dealing with negatives is a real pain in the butt.
Guess and Check
I actually really like this method. Guess and check is probably my most under-used problem solving strategy, but using it to solve equations has been really helpful. I've noticed a greater
understanding of rate(s) of change, using information from
answers to help find right ones and checking answers--something most kids don't want to do--is embedded in the process.
We've gotten to the point where we can nail the answer on the third guess by using the information gained in the first two--even for equations with non-integer solutions.
: Students understand that simplified expressions on each side of the equal sign end up looking the same every time (ax + b = cx + d). Rate of change is very useful. Being wrong helps you to become
right. Did I mention they are checking their answers?
Leave 'em in the comments.
From Construction to Deconstruction
We spend a lot of time teaching kids how to break things down whether it's reducing fractions, simplifying radicals or solving equations but we rarely (read: I rarely) have taught them how to
construct things that may eventually need to be deconstructed.
Constructing a more complicated equation from a simple equation has helped my students understand that, no kidding, the two expressions on opposite sides of the equal sign are equivalent. I've used
the just-unwrap-the-present illustration many times, but we really need to teach the students to wrap one up first. Having them list their steps for construction makes the process for actually
solving the equation seem much more natural. When I say, "just use the inverse order of operations" --or whatever completely abstract thing I've been known to throw out there in order to make myself
feel better when they keep screwing it up--it makes no sense to them. This helps.
Kids get a grasp of which operation to tackle first while solving for x.
: Very complicated equations with variables on both sides don't seem so natural when you begin with x = 2.
I've heard rumors that there are some teachers who actually teach solving equations by graphing. Never seen it in the wild, though.
7 comments:
I have had some success with "construction to deconstruction" by asking students to make up problems to try to stump their colleagues. This prompt motivated at least some of them to try to cook
up very complicated things.
For modeling, the negatives could be balloons?
Nice call, Andrew.
Yeah, we could use negatives but it gets a bit confusing when you need to "take away negatives" from one side when there aren't any on the other. You'd need to add "zeroes" in the form of pairs
of positives and negatives. That's not quite as intuitive as I'd like.
At the very beginning, I teach solving by graphing. They make a table for y = 4x + 5 (or whatever) and graph it. Then we look at the graph and I ask "What is y when x is 2?" questions for a
while, then switch to "What is x when y is 1?"
I motivate other methods of solving by "there's got to be a better way to get the answer to that question..."
I like doing it this way because it really stresses variables as representing all different values, rather than having students think that x always represents a single number, which you just have
to find.
The way we teach our kids with calculator accomodations is to put one side of the equal sign in Y1 and the other side in Y2, graph the lines, and find the intersection. The biggest weakness
(besides not learning how to manipulate equations) is that they have to learn how to manipulate the graphing window.
Example: Y1=x+7
While I really like guess-and-check as an introductory strategy, it makes me wonder how we can help students codify the intuition that they develop in using it; even if students are able to get
the right answer fairly efficiently once they start to develop a sense of how it works, I'd want them to be able to explain why their guesses are good guesses rather than just saying "it felt
I'm not clear from your post, but it feels like this set of activities, in the order in which you describe them, is actually a strong sequence of scaffolding as students progress from easier
equations to more challenging ones and can revert back to other methods and graphing to check their work (and to reinforce that there's more than one way to solve a problem).
We played around with graphing the expressions set equal to y. We plugged them into GeoGebra and just looked around for things they thought were interesting. They were like, "hey, the x value for
the point of intersection is our answer and the y value is the number we get when we know our guess was right."
I couldn't agree more. Sorry I wasn't more clear in the post.
The modeling comes first as a way of just playing with the idea of equivalent expressions. They solve without really "solving."
The guess and check gets them to simplify expressions without worrying about solving and has checking embedded. (I have always felt like it was a fight to get kids to check their solutions)
As a class, we kind of decided that 0 was a good first guess because it's easy to use (also sets the table for finding y intercepts later) and 1 is a pretty good second guess because it gives us
the rate of change between the answers. Once they have locked in on rate of change, they only need a third guess.
Normally, when I teach guess and check, I encourage guessing too high and then too low (or vice versa), but the type of information we gain from two guesses one unit apart is too important to
pass up.
Once we have those two dialed in, we can go to construction and kids can get all kinds of crazy (As Andrew suggested) with their equations and guarantee that the solution is an integer. It also
helps formalize the "steps to solving an equation."
I didn't quite plan it out as well as it seems, but after some reflection, I can't wait to get to equations with my 7th graders. In fact, we have already started playing with modeling. | {"url":"https://coxmath.blogspot.com/2010/09/equations-three-ways.html","timestamp":"2024-11-06T08:46:23Z","content_type":"text/html","content_length":"83677","record_id":"<urn:uuid:6d3e78d2-0ffb-486b-9324-b5fd3846e12c>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00006.warc.gz"} |
20+ Free Algebra Books for Students & Teachers (PDF Ebooks) – Gaurav Tiwari
20+ Free Algebra Books for Students & Teachers (PDF Ebooks)
Looking for Free Algebra and Topology PDF E-books?
I’ve compiled a comprehensive list of high-quality, free e-books covering Algebra, Topology, and other related mathematics topics. These resources are ideal for both students and teachers seeking
valuable content for their studies or teaching. Download these now and enhance your learning experience!
Let's start.
Abstract Algebra Online by Prof. Beachy
This site contains many of the definitions and theorems from the area of mathematics generally called abstract algebra. It is intended for undergraduate students taking an abstract algebra class at
the junior/senior level, as well as for students taking their first graduate algebra course. It is based on the books Abstract Algebra, by John A. Beachy and William D. Blair, and Abstract Algebra II
, by John A. Beachy.
Read/Download Abstract Algebra Online by Prof. Beachy
Understanding Algebra by James Brennan
This text is suitable for high-school Algebra I, as a refresher for college students who need help preparing for college-level mathematics, or for anyone who wants to learn introductory algebra.
Ebook version can be bought from here / Mirror.
Official page: Understanding Algebra by James Brennan
Abstract Algebra : Theory and Applications by Tom Judson
Abstract Algebra: Theory and Applications is an open-source textbook written by Tom Judson that is designed to teach the principles and theory of abstract algebra to college juniors and seniors in a
rigorous manner. Its strengths include a wide range of exercises, both computational and theoretical, plus many nontrivial applications.
Download Page (Annual Editions can be downloaded for free!)
Elements of Abstract and Linear Algebra
A foundational textbook on abstract algebra with emphasis on linear algebra.
Download full book in PDF
A first course in Linear Algebra
A First Course in Linear Algebra is an introductory textbook designed for university sophomores and juniors. Typically such a student will have taken calculus, but this is not a prerequisite.
The book begins with systems of linear equations, then covers matrix algebra, before taking up finite-dimensional vector spaces in full generality.
The final chapter covers matrix representations of linear transformations, through diagonalization, change of basis and Jordan canonical form. Along the way, determinants and eigenvalues get fair
There is a comprehensive online edition and PDF versions are available to download for printing or on-screen viewing.
Abstract Algebra by Robert Ash
Abstract Algebra: The Basic Graduate Year, A Course in Algebraic Number Theory, and, A Course in Commutative Algebra are three e-books by Robert Ash and are available here on his website.
Linear Algebra by Leif Mejlbro
Leif has published many of his books, sixty-six to be precise, freely on Bookboon.com . You can see what he offers freely at http://bookboon.com/en/search?q=author%3A%22Leif%20Mejlbro%22 .
In his Linear Algebra book-series he offers great details and excellent write up. He has aimed this series to be a practical guide for students in Physics and the technical sciences. For that reason
the emphasis has been laid on worked examples, while the mathematical theory is only briefly sketched without proofs. There are total three books on Linear Algebra, all hosted at Bookboon.com all of
which can be downloaded freely.
Group theory by Arjeh Cohen, Rosane Ushirobira, Jan Draisma
Symmetry plays an important role in chemistry and physics, both at the macroscopic and the microscopic level. Group theory is an abstract setting capturing the symmetry in a very efficient manner,
which helps to make computations more efficient. We focus on abstract group theory, deal with representations of groups by means of permutations and by means of matrices, and deal with some
applications in chemistry and physics.
Intro to Abstract Algebra by Paul Garrett
The text covers basic algebra of polynomials, induction and the well-ordering principle, sets, counting principles, integers, unique factorization into primes, prime numbers, Sun Ze’s theorem, hood
algorithm for exponentiation, Fermat’s little theorem, Euler’s theorem, public-key ciphers, pseudoprimes and primality tests, vectors and matrices, motions in two and three dimensions, permutations
and symmetric groups, rings and fields, etc.
Other Great Books on Algebra
This list is expandable. If you know any other book on Algebra which is available online for free, then please give a few seconds and put that into the Comment-Box below, with the link. (It supports
basic HTML and Markdown writing.) | {"url":"https://gauravtiwari.org/free-online-algebra-books/?utm_source=self&utm_medium=related&utm_campaign=related_posts","timestamp":"2024-11-10T18:56:00Z","content_type":"text/html","content_length":"79053","record_id":"<urn:uuid:2d167ee0-a98c-430e-a374-a03efdf099e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00099.warc.gz"} |
Sampler of directed graphs with given degree sequence
An efficient algorithm for sampling directed graphs
About the code
The code is an implementation of the algorithm described in [1]. It provides an efficient way to perform sampling of the realizations of any given bi-degree sequence. Previously existing graph
sampling methods were either link-swap based (Markov-Chain Monte Carlo algorithms) or stub-matching based (the Configuration Model). Both types are ill-controlled, with typically unknown mixing times
for link-swap methods and uncontrolled rejections for the Configuration Model. Conversely, this is an efficient, polynomial time algorithm that generates statistically independent directed graph
samples with a given, arbitrary, bi-degree sequence. Unlike other algorithms, this degree-based method [2] always produces a sample, without backtracking or rejections.
If you use this code for your research, I kindly ask you to cite Ref. 1 in your publications.
Download the code
How to use
The code consists of the files DiGSampler.c, DiGSampler.h and DiGSamp.h. To use it, just include DiGSamp.h in your code, and compile DiGSampler.c with your other source files, remembering to use the
option -lm as it needs to link to the math library.
The code defines two new data structures, called bds and digraph. The members of the structure bds, used internally by the sampling algorithm, are
• int *indegree
• int *outdegree
• int *label
• int *forbidden
The members of the structure digraph are
Before starting sampling a bi-degree sequence, the sampler needs to be initialized by invoking the function digsaminit. The prototype of the function is
void digsaminit(int **seq, const int n)
where n is the number of nodes in the sequence, and seq is a pointer to an nX2 matrix containing the bi-degree sequence. The sequence has to be stored so that, given an index i, seq[i][0] and seq[i]
[1] are the in-degree and the out-degree of the ith node, respectively.
To create and store a sample realization of a given bi-degree sequence, the user should invoke the function digsam. The prototype of the function is
digraph digsam(double (*rng)(void), const int stfl)
The function returns a realization of the bi-degree sequence for which it has been initialized, using the user-specified random number generator rng.
The sequence contained in seq must be lexicographically ordered
. The random number generator must be a function taking no input parameters and returning a double precision floating point number between 0 and 1. The generator must be already seeded. This leaves
the user the choice of the generator to use. The variable stfl is a flag governing the way target nodes are chosen for connection: if set to 0, the nodes are chosen randomly amongst those allowed; if
set to anything but 0, the nodes are chosen with a probability proportional to their residual in-degree. Given a sequence and random number generator, the user creates a sample by declaring a
variable G of type digraph, and then assigning it the return value of digsam:
G = digsam(rng,0);
After the assignment, G.list is a densely allocated matroid containing the out-adjacency list, and G.weight is the logarithm of the weight associated with that particular sample. New samples of the
sequence can be obtained invoking gsam again.
Please note that the out-adjacency list of a previous sample is destroyed with further calls to the sampler,
even if the sample is assigned to a different variable
. Thus, for instance, the lines
G = digsam(rng,1);
H = digsam(rng,1);
will result in the same out-adjacency list stored in G.list and in H.list. Also note that while the user can switch at will between uniform and degree-proportional choice of the target nodes, samples
and weights of only one kind should be used for statistical averages.
After finishing sampling a given sequence, the memory used should be cleaned by invoking digsamclean(). Afterwards, the sampler is ready to be initialized again with another bi-degree sequence.
A minimal proof of concept program is included, in the file poc.c. The code can be compiled on a standard GNU/Linux distribution with the command
gcc -std=c99 -lm -o poc DiGSampler.c poc.c
The program invokes the directed graph sampling algorithm to produce 10 realizations of the sequence {(3, 0), (3, 0), (1, 2), (1, 2), (1, 2), (1, 2), (1, 2), (1, 2)}, using a simple random number
generator. The first 5 samples are created using uniform choice of nodes, and the remaining using degree-proportional selection. After the generation of each sample, the out-adjacency list and the
logarithm of the weight are displayed on screen. Please note that (pseudo) random number generation for scientific or cryptographic applications is a complex subject, and the actual generator to use
in publication-level sampling should be an established, tested, one. In the proof of concept, a simple one is used just for sake of simplicity. It should probably not be used otherwise, and
definitely not be used for cryptographic applications, as there exist more appropriate and far better generators.
One of the return values provided by the code is the logarithm of the weight associated with each sample, to be used in an expression for the weighted mean of some observable. To avoid dealing with
numbers of substantially different order of magnitude, a useful trick is to employ a formula for the logarithm of the sum. This way, one can find directly log(a+b) knowing log(a) and log(b). To see
how this works, call x=log(a), y=log(b), and result=log(a+b). Then the following chain of identities holds:
log(a+b) =
= log(a*(1+b/a))
= log(a) + log(1+b/a)
= x + log(1+b/a)
= x + log(1+exp(y)/exp(x))
= x + log(1+exp(y-x))
Now notice that, if y>x, then y-x>0, and therefore the exponential in the expression above can grow without control. However, if y≤x, then the argument of that same exponential is negative or 0.
Then, in this case, that exponential will be a real number between 0 and 1. If it is so, then the second term in the sum is log(1+ε), with ε between 0 and 1. But this is a very easily computed
quantity, as it can be comfortably and precisely expanded in series, so much that the C programming language even has a function for it (log1p). Then, knowing x and y, all one needs to do is to make
sure that y≤x. Since the sum is a symmetric operation, all this can be easily written in C as
result = fmax(x,y) + log1p(exp(-fabs(x-y)));
fmax returns whichever is greater between x and y, fabs returns the absolute value of the difference between x and y, and the minus sign before it makes sure that the exponential is negative. Since
the exponential is quite small, the whole formula is particularly stable.
Aside from this, there are still some caveats. The first is that, in the weighted mean formula for a series of observable measurements Q_i with weights w_i, it's probably better not to compute the
sum of w_iQ_i and the sum of w_i independently and then subtract the logarithms. Instead, one can use a stable algorithm to directly compute the ensemble average of Q on the fly. A particularly
well-suited algorithm is West's algorithm [3], which is very straightforward. An easy explanation of the algorithm can be found
, under the section "Weighted incremental algorithm". As a good side-effect, the algorithm will also provide the uncertainty associated with the ensemble average of Q. Notice that, as it's discussed
in West's original paper, this algorithm should be used only when one cannot save all the data and analyse them later, in which case the best choice would be a two-pass algorithm.
A second point of caution is that when computing mean and standard deviation, one often ends up not just summing, but also subtracting. In fact, subtractions are carried out about 50% of the times.
The above formula for the logarithmic sum can be adapted for subtraction too, becoming
log(a-b) = x + log(1-exp(y-x)),
or, in C,
result = x + log1p(-exp(y-x));
The formula is always valid in the general case of a>b, but it's not as stable as that of the sum. The reason is that log(1+ε) changes relatively slowly for ε>0, but it changes quite quickly for ε<0.
However, this is not too big a concern in the case of a mean and standard deviation calculation. In fact, West's algorithm converges relatively quickly to the correct value. This means that the
amplitude of the oscillations around the actual ensemble average will quickly decrease. Thus, potentially the only problematic situations can happen for the first few terms in the calculation of the
mean (or better, for half of them), but typically this is not a problem.
Finally, the last thing to be aware of is that of course the logarithmic formulae above will work only if one is dealing with positive numbers. However, some observables could very well be negative.
For instance, one might be measuring one of the assortativity coefficients of a graph. In this case, the range of possible values for the observable would be from -1 to 1. Anyway, problems such as
this are easily solved if one knows the theoretical range of the measurements. Then, one can artificially sum a certain same number to all the measurements, and average over these "shifted" results.
In the assortativity example, one could sum 2 to all the measurements, thus making sure that the averages would be over the range 1 to 3, and then subtract 2 from the final result. This would
guarantee that one never tries to take the logarithm of a negative number, and, importantly, that one can always say, a priori, which is the greater of the two numbers involved at every step.
[1] Kim, Del Genio, Bassler and Toroczkai, New J. Phys. 14, 023012
[2] Del Genio, Kim, Toroczkai and Bassler, PLoSOne 5 (4), e10012
[3] West, Commun. ACM 22, 532-535
Release information
Current version
: fixed function prototyping.
Old versions
: fixed compliance with C99 standard.
: fixed unchecked buffer overrun, which could have been triggered if the first node in the (reordered) derivative sequence built to find the fail in-degree had an out-degree equal to the number of
nodes in the sequence minus 1.
: fixed potential memory leak condition, triggered if the code was used to generate samples from many different sequences. Thanks to Lyle Muller for signaling the bug. | {"url":"https://charodelgenio.weebly.com/directed-graph-sampling.html","timestamp":"2024-11-13T02:37:36Z","content_type":"text/html","content_length":"39482","record_id":"<urn:uuid:5051f449-324b-4299-b2a9-2b07eaaf484d>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00632.warc.gz"} |
The Stacks project
Lemma 63.3.6. Let $Y$ be a scheme. Let $j : X \to \overline{X}$ be an open immersion of schemes over $Y$ with $\overline{X}$ proper over $Y$. Denote $f : X \to Y$ and $\overline{f} : \overline{X} \to
Y$ the structure morphisms. For $\mathcal{F} \in \textit{Ab}(X_{\acute{e}tale})$ there is a canonical isomorphism (see proof)
\[ f_!\mathcal{F} \longrightarrow \overline{f}_!j_!\mathcal{F} \]
As we have $\overline{f}_! = \overline{f}_*$ by Lemma 63.3.4 we obtain $\overline{f}_* \circ j_! = f_!$ as functors $\textit{Ab}(X_{\acute{e}tale}) \to \textit{Ab}(Y_{\acute{e}tale})$.
Comments (0)
Post a comment
Your email address will not be published. Required fields are marked.
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
All contributions are licensed under the GNU Free Documentation License.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder,
this is tag 0F52. Beware of the difference between the letter 'O' and the digit '0'.
The tag you filled in for the captcha is wrong. You need to write 0F52, in case you are confused. | {"url":"https://stacks.math.columbia.edu/tag/0F52","timestamp":"2024-11-12T00:43:34Z","content_type":"text/html","content_length":"15223","record_id":"<urn:uuid:81a6e05b-4d7a-4bf3-abed-b7e1163f62a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00586.warc.gz"} |
Tom Hodson - Interactive web maps from a static file
Interactive web maps from a static file
PMTiles is a new project that lets you serve vector map data from static files through the magic of HTTP range requests.
The vector data is entirely served from a static file on this server. Most interactive web maps work by constantly requesting little map images from an external server at different zoom levels. This
approach uses much less data and doesn’t require an external server to host all the map data.
Getting this to work was a little tricky, I mostly followed the steps from Simon Willison’s post but I didn’t want to use npm. As I write this I realise that this site is generated with jekyll
which uses npm anyway but somehow I would like the individual posts to Just Work™ without worrying about updating libraries and npm.
So I grabbed maplibre-gl.css, maplibre-gl.js and pmtiles.js, plonked them into this site and started hacking around. I ended up mashing up the code from Simon Willison’s post and the official
examples to get something that worked.
I figured out from this github issue how to grab a module version of protomaps-themes-base without npm. However I don’t really like the styles in produces. Instead I played around a bit with the
generated json styles to make something that looks a bit more like the Stamen Toner theme.
Looking at the source code for protomaps-themes-base I realise I could probably make custom themes much more easily by just swapping out the theme variables in the package.
• Figure out how to use maputnik to generate styles for PMTiles. | {"url":"https://thomashodson.com/2023/10/30/maps-3.html","timestamp":"2024-11-07T18:57:53Z","content_type":"text/html","content_length":"15159","record_id":"<urn:uuid:f9135321-e6e0-4522-a07d-ee1f9ef9ba9b>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00853.warc.gz"} |
Simulating Uncertainty
Previously I posted about a simulation of the Apollo 10 LM descent stage which shows that the stage remains in lunar orbit to the present day. How robust is this result? The data for the initial
stage orbit comes from the Mission Report...no doubt it was the best information they had. But fifty years in lunar orbit is a very long time, and lunar gravity is notoriously "lumpy". What if a
slight change in the initial stage orbit state meant the difference between stability and decay?
To answer this question, I ran a set of 50 simulations, each with the initial conditions randomly varied to cover any possible miss in the initial state of the stage. I tried to keep the variation
wide, to insure I covered the real conditions, but I also stuck to reasonable limits. In fact the variations I applied were so wide that many of the orbits were not viable. To cover this, I ran each
parameter set through one orbit, recording the apolune and perilune...the low and high points of the orbit. I cut any set that was lower or higher by more than 20% from the values NASA reported. Only
about one third of the random sets passed this test. To get 50 sets for the final test I passed more than 150 sets through the initial 1-orbit screen.
Here is a plot of the perilune points for all 50 random parameter sets, showing their minimum orbit altitude after 10 years in orbit. It's a bit messy, as these orbits show quite a bit of variation.
But the important thing to note is that in all 50 cases, the stage was still in orbit after 10 years. Each one of these plots is very similar to the "nominal" orbit I simulated initially. What if we
just find the one of these, out of the 50, that got lower than any of the others, and plot it out by itself? Here it is.
You can see that this one did indeed make a rather low pass, in December of 1979, to about 12 km above the mean radius. (Still well above any lunar mountains.) And if you saw my earlier post about
the stage orbit behavior, you see the same patterns here. The oscillation over a period of 25 days, and a longer oscillation with a period of around 5 months. Why did this one get lower than the
others? It was one of the lowest initially, so it is hardly surprising. The real question is whether this one is any less stable over decades than the "nominal" orbit that I showed before. What
happens if we simulate this orbit out to the present? Here is the answer:
It is every bit as stable in it's orbit as the "nominal" case. There is no long term decay in evidence, and the simulated stage remains in orbit to the present day.
To me, this represents rather convincing proof. The result I got the first time I ran a simulation out to 50 years was no fluke. The nominal stage orbit is just one of a family of similar orbits that
all exhibit long term stability. If something knocked the stage out of orbit during those 50 years, it wasn't the moon's lumpy gravity.
No comments: | {"url":"https://snoopy.rogertwank.net/2020/02/simulating-uncertainty.html","timestamp":"2024-11-06T21:40:16Z","content_type":"text/html","content_length":"44930","record_id":"<urn:uuid:93660d30-615b-402b-992c-cfbb733d1cd2>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00111.warc.gz"} |
The Recurrence Problem
About Articles Books Lectures Presentations Glossary Cite page Help? Translate
Philosophers The Recurrence Problem or Wiederkehreinwand
The idea that the macroscopic conditions in the world will repeat after some interval of
Mortimer Adler time is an ancient idea, but it plays a vital role in modern physics as well. Ancient
Rogers Albritton middle eastern civilizations called it the Great Year. They calculated it as the time after
Alexander of which the planets would realign themselves in identical positions in the sky.
Aphrodisias The Great Year should not be confused with the time that the precession of the equinoxes
Samuel Alexander takes to return the equinoxes to the same position along the Zodiac - although this time
William Alston (about 26,000 years) is of the same order of magnitude as one famous number given by
Anaximander Babylonian astronomers for the Great Year (36,000 years). Many societies have the concept
G.E.M.Anscombe of the Great Year, but none did calculations as carefully as the Babylonians. But since the
Anselm planets orbital periods are not really commensurate, they kept increasing the time for the
Louise Antony Great Year in the search for a better recurrence time. The Greek and Roman Stoics thought
Thomas Aquinas the Great Year was a sign of the rule of law in nature and the God of reason that lay
Aristotle behind nature.
David Armstrong
Harald Nietsche's Eternal Return
Robert Audi In modern philosophy, Friedrich Nietzsche described an eternal return in his Also Sprach
Augustine Zarathustra.
J.L.Austin Zermelo's Paradox
A.J.Ayer Zermelo's paradox was a criticism of Ludwig Boltzmann's H-Theorem, his failed attempt to
Alexander Bain derive the increasing entropy required by the second law of thermodynamics from basic
Mark Balaguer classical dynamics. It was the second "paradox" attack on Boltzmann. The first was Josef
Jeffrey Barrett Loschmidt's claim that entropy would be reduced if time were reversed. This is the problem
William Barrett of microscopic reversibility. The two problems of reversibility (Loschmidt's paradox or
William Belsham Umwiederkehreinwand) and recurrence (Zermelo's Paradox or Wiederkehreinwand) were both
Henri Bergson raised by 19th-century critics against Boltzmann's failed H-theorem attempt to explain the
George Berkeley increase of thermodynamic entropy. Ernst Zermelo was an extraordinary mathematician. He was
Isaiah Berlin (in 1908) the founder of axiomatic set theory, which with the addition of the axiom of
Richard J. choice (also by Zermelo, in 1904) is the most common foundation of mathematics. The axiom
Bernstein of choice says that given any collection of sets, one can find a way to unambiguously
Bernard Berofsky select one object from each set, even if the number of sets is infinite. Before this
Robert Bishop amazing work, Zermelo was a young associate of Max Planck in Berlin, one of many German
Max Black physicists who opposed the work of Boltzmann to establish the existence of atoms. Zermelo's
Susanne Bobzien criticism was based on the work of Henri Poincaré, an expert in the three-body problem,
Emil du which, unlike the problem of two particles, has no exact analytic solution. Where
Bois-Reymond two-bodies can move in paths that may repeat exactly after a certain time, three bodies may
Hilary Bok only come arbitrarily close to an initial configuration, given enough time. Poincaré had
Laurence BonJour been able to establish limits or bounds on the possible configurations of the three bodies
George Boole from conservation laws. Planck and Zermelo applied some of Poincaré's thinking to the n
Émile Boutroux particles in a gas. They argued that given a long enough time, the particles would return
Daniel Boyd to a distribution in "phase space" (a 6n dimensional space of possible velocities and
F.H.Bradley positions) that would be indistinguishable from the original distribution. This is called
C.D.Broad the Poincaré "recurrence time." Thus, they argued, Boltzmann's formula for the entropy
Michael Burke would at some future time go back down, vitiating Boltzmann's claim that his measure of
Lawrence Cahoone entropy always increases - as the second law of thermodynamics requires. Poincaré'
C.A.Campbell described his view in 1890.
Joseph Keim
Campbell A theorem, easy to prove, tells us that a bounded world, governed only by the laws of
Rudolf Carnap mechanics, will always pass through a state very close to its initial state. On the
Carneades other hand, according to accepted experimental laws (if one attributes absolute
Nancy Cartwright validity to them, and if one is willing to press their consequences to the extreme),
Gregg Caruso the universe tends toward a certain final state, from which it will never depart. In
Ernst Cassirer this final state, which will be a kind of death, all bodies will be at rest at the same
David Chalmers temperature. I do not know if it has been remarked that the English kinetic theories
Roderick Chisholm can extricate themselves from this contradiction. The world, according to them, tends
Chrysippus at first toward a state where it remains for a long time without apparent change; and
Cicero this is consistent with experience; but it does not remain that way forever, if the
Tom Clark theorem cited above is not violated; it merely stays there for an enormously long time,
Randolph Clarke a time which is longer the more numerous are the molecules. This state will not be the
Samuel Clarke final death of the universe, but a sort of slumber, from which it will awake after
Anthony Collins millions of millions of centuries.
Corradini Poincaré's "little patience" would be severely tried by Boltzmann's calculation that
Diodorus Cronus even a small number of particles would not recur in his "millions and millions of
Jonathan Dancy centuries"
Donald Davidson
Mario De Caro According to this theory, to see heat pass from a cold body to a warm one, it will not
Democritus be necessary to have the acute vision, the intelligence, and the dexterity of Maxwell's
Daniel Dennett demon; it will suffice to have a little patience. One would like to be able to stop at
Jacques Derrida this point and hope that some day the telescope will show us a world in the process of
René Descartes waking up, where the laws of thermodynamics are reversed.
Richard Double
Fred Dretske Boltzmann replied that his argument was statistical. He only claimed that entropy increase
John Dupré was overwhelmingly more probable than Zermelo's predicted decrease. Boltzmann calculated
John Earman the probability of a decrease of a very small gas of only a few hundred particles and found
Laura Waddell the time needed to realize such a decrease was many orders of magnitude larger than the
Ekstrom presumed age of the universe. The idea that a macroscopic system can return to exactly the
Epictetus same physical conditions is closely related to the idea that an agent may face "exactly the
Epicurus same circumstances" in making a decision. Determinists maintain that given the "fixed past"
Austin Farrer and the "laws of nature" that the agent would have to make exactly the same decision again.
Herbert Feigl
Arthur Fine The Extreme Improbability of Perfect Recurrence
John Martin
Fischer In a classical deterministic universe, such as that of Laplace, where information is
Frederic Fitch constant, Zermelo's recurrence is mathematically possible. Given enough time, the universe
Owen Flanagan can return to the exact circumstance of any earlier instant of time, because it contains
Luciano Floridi the same amount of matter, energy, and information. But, in the real universe, David Layzer
Philippa Foot has argued that information (and the material content of the universe) expands from a
Alfred Fouilleé minimum at the origin, to ever larger amounts of information. Consequently, it is
Harry Frankfurt statistically and realistically improbable (if not impossible) for the universe as a whole
Richard L. to return to exactly the same circumstance of any earlier time.
Franklin Arthur Stanley Eddington was probably the first to see that the expanding universe provides
Bas van Fraassen a resolution to Zermelo's objection to Boltzmann.
Michael Frede
Gottlob Frege By accepting the theory of the expanding universe we are relieved of one conclusion
Peter Geach which we had felt to be intrinsically absurd. It was argued that every possible
Edmund Gettier configuration of atoms must repeat itself at some distant date. But that was on the
Carl Ginet assumption that the atoms will have only the same choice of configurations in the
Alvin Goldman future that they have now. In an expanding space any particular congruence becomes more
Gorgias and more improbable. The expansion of the universe creates new possibilities of
Nicholas St. John distribution faster than the atoms can work through them, and there is no longer any
Green likelihood of a particular distribution being repeated. If we continue shuffling a pack
H.Paul Grice of cards we are bound sometime to bring them into their standard order — but not if the
Ian Hacking conditions are that every morning one more card is added to the pack.
Ishtiyaque Haji
Stuart Hampshire H. Dieter Zeh also sees that the age of the universe being much less than the Poincaré
W.F.R.Hardie recurrence time may invalidate the recurrence objection.
Sam Harris
William Hasker Another argument against the statistical interpretation of irreversibility, the
R.M.Hare recurrence objection (or Wiederkehreinwand), was raised much later by Ernst Friedrich
Georg W.F. Hegel Zermelo, a collaborator of Max Planck at a time when the latter still opposed atomism,
Martin Heidegger and instead supported the 'energeticists', who attempted to understand energy and
Heraclitus entropy as fundamental 'substances'. This argument is based on a mathematical theorem
R.E.Hobart due to Henri Poincaré, which states that every bounded mechanical system will return as
Thomas Hobbes close as one wishes to its initial state within a sufficiently large time. The entropy
David Hodgson of a closed system would therefore have to return to its former value, provided only
Shadsworth the function F(z) is continuous. This is a special case of the quasiergodic theorem
Hodgson which asserts that every system will corne arbitrarily close to any point on the
Baron d'Holbach hypersurface of fixed energy (and possibly with other fixed analytical constants of the
Ted Honderich motion) within finite time. While all these theorems are mathematically correct, the
Pamela Huby recurrence objection fails to apply to reality for quantitative reasons. The age of our
David Hume Universe is much smaller than the Poincaré recurrence times even for a gas consisting
Ferenc Huoranszki of no more than a few tens of particles. Their recurrence to the vicinity of their
Frank Jackson initial states (or their coming close to any other similarly specific state) can
William James therefore be excluded in practice. Nonetheless, some 'foundations' of irreversible
Lord Kames thermodynamics in the literature rely on formal idealizations that would lead to
Robert Kane strictly infinite Poincaré recurrence times (for example the 'thermodynamical limit' of
Immanuel Kant infinite particle number). Such assumptions are not required in our Universe of finite
Tomis Kapitan age, and they would not invalidate the reversibility objection (or the equilibrium
Walter Kaufmann expectation, mentioned above). However, all foundations of irreversible behavior have
Jaegwon Kim to presume some very improbable initial conditions...
William King
Hilary Kornblith In order to reverse the thermodynamical arrow of time in a bounded system, it would not
Christine therefore suffice to "go ahead and reverse all momenta" in the system itself, as
Korsgaard ironically suggested by Boltzmann as an answer to Loschmidt. In an interacting
Saul Kripke Laplacean universe, the Poincaré cycles of its subsystems could in general only be
Thomas Kuhn those of the whole Universe, since their exact Hamiltonians must always depend on their
Andrea Lavazza time-dependent environment.
Christoph Lehner
Keith Lehrer Chapter 5.6 - Mind-Body Problem Chapter 5.8 - Reversibility
Gottfried Leibniz Part Four - Freedom Part Six - Solutions
Jules Lequyer
Leucippus Normal | Teacher | Scholar
Michael Levin
Joseph Levine
George Henry
David Lewis
Peter Lipton
C. Lloyd Morgan
John Locke
Michael Lockwood
Arthur O. Lovejoy
E. Jonathan Lowe
John R. Lucas
Ruth Barcan
Tim Maudlin
James Martineau
Nicholas Maxwell
Storrs McCall
Hugh McCann
Colin McGinn
Michael McKenna
Brian McLaughlin
John McTaggart
Paul E. Meehl
Uwe Meixner
Alfred Mele
Trenton Merricks
John Stuart Mill
Dickinson Miller
Thomas Nagel
Otto Neurath
John Norton
Robert Nozick
William of Ockham
Timothy O'Connor
David F. Pears
Charles Sanders
Derk Pereboom
Steven Pinker
Karl Popper
Huw Price
Hilary Putnam
Willard van Orman
Frank Ramsey
Ayn Rand
Michael Rea
Thomas Reid
Charles Renouvier
Nicholas Rescher
Richard Rorty
Josiah Royce
Bertrand Russell
Paul Russell
Gilbert Ryle
Jean-Paul Sartre
Kenneth Sayre
Moritz Schlick
John Duns Scotus
John Searle
Wilfrid Sellars
David Shiang
Alan Sidelle
Ted Sider
Henry Sidgwick
Peter Slezak
Saul Smilansky
Michael Smith
Baruch Spinoza
L. Susan Stebbing
Isabelle Stengers
George F. Stout
Galen Strawson
Peter Strawson
Eleonore Stump
Francisco Suárez
Richard Taylor
Kevin Timpe
Mark Twain
Peter Unger
Peter van Inwagen
Manuel Vargas
John Venn
Kadri Vihvelin
G.H. von Wright
David Foster
R. Jay Wallace
Ted Warfield
Roy Weatherford
C.F. von
William Whewell
Alfred North
David Widerker
David Wiggins
Bernard Williams
Susan Wolf
David Albert
Michael Arbib
Walter Baade
Bernard Baars
Jeffrey Bada
Leslie Ballentine
Marcello Barbieri
Gregory Bateson
Horace Barlow
John S. Bell
Mara Beller
Charles Bennett
Ludwig von
Susan Blackmore
Margaret Boden
David Bohm
Niels Bohr
Ludwig Boltzmann
Emile Borel
Max Born
Satyendra Nath
Walther Bothe
Jean Bricmont
Hans Briegel
Leon Brillouin
Stephen Brush
Henry Thomas
S. H. Burbury
Melvin Calvin
Donald Campbell
Sadi Carnot
Anthony Cashmore
Eric Chaisson
Gregory Chaitin
Rudolf Clausius
Arthur Holly
John Conway
Jerry Coyne
John Cramer
Francis Crick
E. P. Culverwell
Antonio Damasio
Olivier Darrigol
Charles Darwin
Richard Dawkins
Terrence Deacon
Lüder Deecke
Richard Dedekind
Louis de Broglie
Stanislas Dehaene
Max Delbrück
Abraham de Moivre
Paul Dirac
Hans Driesch
John Eccles
Arthur Stanley
Gerald Edelman
Paul Ehrenfest
Manfred Eigen
Albert Einstein
George F. R.
Hugh Everett, III
Franz Exner
Richard Feynman
R. A. Fisher
David Foster
Joseph Fourier
Philipp Frank
Steven Frautschi
Edward Fredkin
Benjamin Gal-Or
Howard Gardner
Lila Gatlin
Michael Gazzaniga
J. Willard Gibbs
James J. Gibson
Nicolas Gisin
Paul Glimcher
Thomas Gold
A. O. Gomes
Brian Goodwin
Joshua Greene
Dirk ter Haar
Jacques Hadamard
Mark Hadley
Patrick Haggard
J. B. S. Haldane
Stuart Hameroff
Augustin Hamon
Sam Harris
Ralph Hartley
Hyman Hartman
Jeff Hawkins
John-Dylan Haynes
Donald Hebb
Martin Heisenberg
Werner Heisenberg
John Herschel
Basil Hiley
Art Hobson
Jesper Hoffmeyer
Don Howard
John H. Jackson
William Stanley
Roman Jakobson
E. T. Jaynes
Pascual Jordan
Eric Kandel
Ruth E. Kastner
Stuart Kauffman
Martin J. Klein
William R. Klemm
Christof Koch
Simon Kochen
Hans Kornhuber
Stephen Kosslyn
Daniel Koshland
Ladislav Kovàč
Leopold Kronecker
Rolf Landauer
Alfred Landé
Karl Lashley
David Layzer
Joseph LeDoux
Gerald Lettvin
Gilbert Lewis
Benjamin Libet
David Lindley
Seth Lloyd
Hendrik Lorentz
Josef Loschmidt
Alfred Lotka
Ernst Mach
Donald MacKay
Henry Margenau
Owen Maroney
David Marr
Humberto Maturana
James Clerk
Ernst Mayr
John McCarthy
Warren McCulloch
N. David Mermin
George Miller
Stanley Miller
Ulrich Mohrhoff
Jacques Monod
Emmy Noether
Donald Norman
Alexander Oparin
Abraham Pais
Howard Pattee
Wolfgang Pauli
Massimo Pauri
Wilder Penfield
Roger Penrose
Steven Pinker
Colin Pittendrigh
Walter Pitts
Max Planck
Susan Pockett
Henri Poincaré
Daniel Pollen
Ilya Prigogine
Hans Primas
Zenon Pylyshyn
Henry Quastler
Adolphe Quételet
Pasco Rakic
Nicolas Rashevsky
Lord Rayleigh
Frederick Reif
Jürgen Renn
Giacomo Rizzolati
A.A. Roback
Emil Roduner
Juan Roederer
Jerome Rothstein
David Ruelle
David Rumelhart
Robert Sapolsky
Tilman Sauer
Ferdinand de
Erwin Schrödinger
Aaron Schurger
Sebastian Seung
Thomas Sebeok
Franco Selleri
Claude Shannon
Abner Shimony
Herbert Simon
Dean Keith
Edmund Sinnott
B. F. Skinner
Lee Smolin
Ray Solomonoff
Roger Sperry
John Stachel
Henry Stapp
Tom Stonier
Antoine Suarez
Leo Szilard
Max Tegmark
Teilhard de
Libb Thims
William Thomson
Richard Tolman
Giulio Tononi
Peter Tse
Alan Turing
C. S.
Francisco Varela
Vlatko Vedral
Heinz von
Richard von Mises
John von Neumann
Jakob von Uexküll
C. H. Waddington
John B. Watson
Daniel Wegner
Steven Weinberg
Paul A. Weiss
Herman Weyl
John Wheeler
Jeffrey Wicken
Wilhelm Wien
Norbert Wiener
Eugene Wigner
E. O. Wilson
Günther Witzany
Stephen Wolfram
H. Dieter Zeh
Semir Zeki
Ernst Zermelo
Wojciech Zurek
Konrad Zuse
Fritz Zwicky
Free Will
Mental Causation
James Symposium
A theorem, easy to prove, tells us that a bounded world, governed only by the laws of mechanics, will always pass through a state very close to its initial state. On the other hand, according to
accepted experimental laws (if one attributes absolute validity to them, and if one is willing to press their consequences to the extreme), the universe tends toward a certain final state, from which
it will never depart. In this final state, which will be a kind of death, all bodies will be at rest at the same temperature. I do not know if it has been remarked that the English kinetic theories
can extricate themselves from this contradiction. The world, according to them, tends at first toward a state where it remains for a long time without apparent change; and this is consistent with
experience; but it does not remain that way forever, if the theorem cited above is not violated; it merely stays there for an enormously long time, a time which is longer the more numerous are the
molecules. This state will not be the final death of the universe, but a sort of slumber, from which it will awake after millions of millions of centuries. Poincaré's "little patience" would be
severely tried by Boltzmann's calculation that even a small number of particles would not recur in his "millions and millions of centuries" According to this theory, to see heat pass from a cold body
to a warm one, it will not be necessary to have the acute vision, the intelligence, and the dexterity of Maxwell's demon; it will suffice to have a little patience. One would like to be able to stop
at this point and hope that some day the telescope will show us a world in the process of waking up, where the laws of thermodynamics are reversed.
Poincaré's "little patience" would be severely tried by Boltzmann's calculation that even a small number of particles would not recur in his "millions and millions of centuries"
By accepting the theory of the expanding universe we are relieved of one conclusion which we had felt to be intrinsically absurd. It was argued that every possible configuration of atoms must repeat
itself at some distant date. But that was on the assumption that the atoms will have only the same choice of configurations in the future that they have now. In an expanding space any particular
congruence becomes more and more improbable. The expansion of the universe creates new possibilities of distribution faster than the atoms can work through them, and there is no longer any likelihood
of a particular distribution being repeated. If we continue shuffling a pack of cards we are bound sometime to bring them into their standard order — but not if the conditions are that every morning
one more card is added to the pack.
Another argument against the statistical interpretation of irreversibility, the recurrence objection (or Wiederkehreinwand), was raised much later by Ernst Friedrich Zermelo, a collaborator of Max
Planck at a time when the latter still opposed atomism, and instead supported the 'energeticists', who attempted to understand energy and entropy as fundamental 'substances'. This argument is based
on a mathematical theorem due to Henri Poincaré, which states that every bounded mechanical system will return as close as one wishes to its initial state within a sufficiently large time. The
entropy of a closed system would therefore have to return to its former value, provided only the function F(z) is continuous. This is a special case of the quasiergodic theorem which asserts that
every system will corne arbitrarily close to any point on the hypersurface of fixed energy (and possibly with other fixed analytical constants of the motion) within finite time. While all these
theorems are mathematically correct, the recurrence objection fails to apply to reality for quantitative reasons. The age of our Universe is much smaller than the Poincaré recurrence times even for a
gas consisting of no more than a few tens of particles. Their recurrence to the vicinity of their initial states (or their coming close to any other similarly specific state) can therefore be
excluded in practice. Nonetheless, some 'foundations' of irreversible thermodynamics in the literature rely on formal idealizations that would lead to strictly infinite Poincaré recurrence times (for
example the 'thermodynamical limit' of infinite particle number). Such assumptions are not required in our Universe of finite age, and they would not invalidate the reversibility objection (or the
equilibrium expectation, mentioned above). However, all foundations of irreversible behavior have to presume some very improbable initial conditions... In order to reverse the thermodynamical arrow
of time in a bounded system, it would not therefore suffice to "go ahead and reverse all momenta" in the system itself, as ironically suggested by Boltzmann as an answer to Loschmidt. In an
interacting Laplacean universe, the Poincaré cycles of its subsystems could in general only be those of the whole Universe, since their exact Hamiltonians must always depend on their time-dependent
Chapter 5.6 - Mind-Body Problem Chapter 5.8 - Reversibility
Part Four - Freedom Part Six - Solutions | {"url":"https://informationphilosopher.com/problems/recurrence/","timestamp":"2024-11-04T02:39:33Z","content_type":"text/html","content_length":"107287","record_id":"<urn:uuid:817cd8bf-c9af-46a6-9f59-4e5033ab46a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00383.warc.gz"} |
A restricted Boltzmann machine is a generative probabilistic graphic network. A probability of finding the network in a certain configuration is given by the Boltzmann distribution. Given training
data, its learning is done by optimizing the parameters of the energy function of the network. In this paper, we analyze the training process of the restricted Boltzmann machine in the context of
statistical physics. As an illustration, for small size bar-and-stripe patterns, we calculate thermodynamic quantities such as entropy, free energy, and internal energy as a function of the training
epoch. We demonstrate the growth of the correlation between the visible and hidden layers via the subadditivity of entropies as the training proceeds. Using the Monte-Carlo simulation of trajectories
of the visible and hidden vectors in the configuration space, we also calculate the distribution of the work done on the restricted Boltzmann machine by switching the parameters of the energy
function. We discuss the Jarzynski equality which connects the path average of the exponential function of the work and the difference in free energies before and after training
This paper describes a parallel implementation of the discontinuous Galerkin method. Discontinuous Galerkin is a spatially compact method that retains its accuracy and robustness on non-smooth
unstructured grids and is well suited for time dependent simulations. Several parallelization approaches are studied and evaluated. The most natural and symmetric of the approaches has been
implemented in an object-oriented code used to simulate aeroacoustic scattering. The parallel implementation is MPI-based and has been tested on various parallel platforms such as the SGI Origin, IBM
SP2, and clusters of SGI and Sun workstations. The scalability results presented for the SGI Origin show slightly superlinear speedup on a fixed-size problem due to cache effects
under Contract NAS 1-97046 Available from the following
Abstract A multi-modal transportation system of a city can be modeled as a multiplex network with different layers corresponding to different transportation modes. These layers include, but are not
limited to, bus network, metro network, and road network. Formally, a multiplex network is a multilayer graph in which the same set of nodes are connected by different types of relationships.
Intra-layer relationships denote the road segments connecting stations of the same transportation mode, whereas inter-layer relationships represent connections between different transportation modes
within the same station. Given a multi-modal transportation system of a city, we are interested in assessing its quality or efficiency by estimating the coverage i.e., a portion of the city that can
be covered by a random walker who navigates through it within a given time budget, or steps. We are also interested in the robustness of the whole transportation system which denotes the degree to
which the system is able to withstand a random or targeted failure affecting one or more parts of it. Previous approaches proposed a mathematical framework to numerically compute the coverage in
multiplex networks. However solutions are usually based on eigenvalue decomposition, known to be time consuming and hard to obtain in the case of large systems. In this work, we propose MUME, an
efficient algorithm for Multi-modal Urban Mobility Estimation, that takes advantage of the special structure of the supra-Laplacian matrix of the transportation multiplex, to compute the coverage of
the system. We conduct a comprehensive series of experiments to demonstrate the effectiveness and efficiency of MUME on both synthetic and real transportation networks of various cities such as
Paris, London, New York and Chicago. A future goal is to use this experience to make projections for a fast growing city like Doha
A computational aeroacoustics code based on the discontinuous Galerkin method is ported to several parallel platforms using MPI. The discontinuous Galerkin method is a compact high-order method that
retains its accuracy and robustness on non-smooth unstructured meshes. In its semi-discrete form, the discontinuous Galerkin method can be combined with explicit time marching methods making it well
suited to time accurate computations. The compact nature of the discontinuous Galerkin method also makes it well suited for distributed memory parallel platforms. The original serial code was written
using an objectoriented approach and was previously optimized for cache-based machines. The port to parallel platforms was achieved simply by treating partition boundaries as a type of boundary
condition. Code modifications were minimal because boundary conditions were abstractions in the original program. Scalability results are presented for the SGI Origin, IBM SP2, and clusters of SGI
and Sun workstations. Slightly superlinear speedup is achieved on a fixed-size problem on the Origin, due to cache effects
A multi-modal transportation system of a city can be modeled as a multiplex network with different layers corresponding to different transportation modes. These layers include, but are not limited
to, bus network, metro network, and road network. Formally, a multiplex network is a multilayer graph in which the same set of nodes are connected by different types of relationships. Intra-layer
relationships denote the road segments connecting stations of the same transportation mode, whereas inter-layer relationships represent connections between different transportation modes within the
same station. Given a multi-modal transportation system of a city, we are interested in assessing its quality or efficiency by estimating the coverage i.e., a portion of the city that can be covered
by a random walker who navigates through it within a given time budget, or steps. We are also interested in the robustness of the whole transportation system which denotes the degree to which the
system is able to withstand a random or targeted failure affecting one or more parts of it. Previous approaches proposed a mathematical framework to numerically compute the coverage in multiplex
networks. However solutions are usually based on eigenvalue decomposition, known to be time consuming and hard to obtain in the case of large systems. In this work, we propose MUME, an efficient
algorithm for Multi-modal Urban Mobility Estimation, that takes advantage of the special structure of the supra-Laplacian matrix of the transportation multiplex, to compute the coverage of the
system. We conduct a comprehensive series of experiments to demonstrate the effectiveness and efficiency of MUME on both synthetic and real transportation networks of various cities such as Paris,
London, New York and Chicago. A future goal is to use this experience to make projections for a fast growing city like Doha.Other Information Published in: EPJ Data Science License: https://
creativecommons.org/licenses/by/4.0See article on publisher's website: http://dx.doi.org/10.1140/epjds/s13688-018-0139-7</p
<p>Code and datasets S1 and S2 used in the paper <strong>ClustMe: A Visual Quality Measure for Ranking Monochrome Scatterplots based on Cluster Patterns.</strong> Computer Graphics
Forum 38(3): 225-236 (2019) and to appear in <strong>ClustML: A Measure of Cluster Pattern Complexity in Scatterplots Learnt from Human-labeled Groupings</strong>, SAGE Information
Visualization Journal.</p><p>Code is written with R4.3.1 language. Data are stored in RData, images and csv formats.</p><p>CONTENT:</p><ul><li>/
_1_TRAINING_MERGER_ON_GMM_PARAMETERS_S1</li></ul><p>Pipeline used to train all CARET ML models to train and find the best merger used in ClustML.</p><p>These functions
use data S1. Refer to README.txt file therein </p><ul><li>/_2_ClustMe_vs_ClustML_257data_S2</li></ul><p>Run the script CompareClustMLvsClustMe_Data257.R to
plot the comparative scatterplot of ClustMe and ClustML scores.</p><ul><li>/_3_USAGE_SCENARIO_GENOMICS</li></ul><p>Check the script to set options, then run:
run_analysis_of_genomic_data_with_ClustML.R</p><p>Process Thousand genome project data (coming as PCA from IBD pairs stored in PCA_of_genomic_data.RData)</p><p>Compute plots
for the usage scenario and summary plot of statistics of all scatterplots based on pairs of PCA.</p><p>Compute the interactive plot for selecting clusters and highlight them in another
scatterplot.</p><ul><li>/CLUSTML_VQM </li></ul><p>Contains the main ClustML function (ClustML_Pipeline() in ClustML_VQM.R) to compute a GMM over scatterplot
(x,y) data and compute the ClustML score. It uses treebag_up_PP_PCA_BoxCox_SpatialSign.RData is a CARET classification model to take merging pairwise decisions. This model is the best obtained by
training on 2-component GMM evaluated for containing one or more-than-on cluster by 34 human subjects.</p><ul><li>/DATASETS</li></ul><p>Contains datasets from
study S1 and S2, with ClustML (CARET model) results and human judgments.</p><p>Scatterplot stimuli can be plot using "plotSP" function from plotDataXY.R (see example in that code)</p&
gt;<ul><li>./DATA_S1_ORIGINAL_PARAMETER_JUDGEMENT_DATA</li></ul><p>1000_2gaussians_param_34judgment_ClustMe_EXP1.csv contains 34 human judgments of each of 1000s
2-component GMM scatterplots and the 8 parameters used to generate a sample from these GMM models.</p><p>"XYposCSVfilename": name of the file in ../
DATA_S1_ORIGINAL_Scatterplots_IMG_ClustMe</p><p>"Nsample": sample size generated from the GMM = number of points in the scatterplot.</p><p>"MuA1","MuA2": mean along axes 1 and
2 of component A of the GMM</p><p>"SigmaA1","SigmaA2": variance along axes 1 and 2 of component A of the GMM </p><p>"ThetaA": angle of the component A of the GMM</p&
gt;<p>"MuB1","MuB2": mean along axes 1 and 2 of component B of the GMM</p><p>"SigmaB1","SigmaB2": variance along axes 1 and 2 of component B of the GMM</p><p>"ThetaB":
angle of the component B of the GMM</p><p>"Tau": proportion of component A</p><p>"Alpha": rotation from horizontal of the full mixture</p><p>
"Score_1",...,"Score_34": Human judgment (1 = see one cluster, 2 = see more-than-one cluster)</p><p>"probMore","probSingle": proportion of judgments seeing more-than-one/one clusters</
p><ul><li>./DATA_S1_ORIGINAL_Scatterplots_IMG_ClustMe</li></ul><p>png image files stimuli shown to the human subjects, and whose filename is used in ../
><p>zzzz.csv file containing x and y coordinates of points displayed in file zzzz.png stored in folder ../DATA_S1_ORIGINAL_Scatterplots_IMG_ClustMe</p><ul><li>./DATA_S2<
/li></ul><p>Data used in Study S2</p><p>Data_257.RData: contains list of filenames and x,y positions of points of the scatterplot stimuli</p><p>
Data257_435pairwiseRanking_CARETmodels.csv /.RData rankings are given by ClustML using various CARET models as merging classifiers trained on S1 data.</p><p>
Data257_435pairwiseRanking_31HumanJudgments.csv /.RData ranking given by 31 human judgments</p><p>The row name is filename1@@@@@filename2, where filename1 and 2 correspond to names in
Data_257</p><p>Each cell contains the filename judged by the column header model/subject, as showing the most complex cluster patterns, BOTH if they are both judged of similar complexity.
</p><ul><li>/DEMO</li></ul><p>Run Demo_ClustML_VQM.R to demonstrate how to use the ClustML_Pipeline function to compute the ClustML score of a scatterplot.</p& | {"url":"https://core.ac.uk/search/?q=author%3A(Abdelkader%20Baggag)","timestamp":"2024-11-14T21:16:20Z","content_type":"text/html","content_length":"119645","record_id":"<urn:uuid:5b7e1209-8467-44f6-851a-73f03a46160c>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00262.warc.gz"} |
In the final installment of this series, I want to discuss how we can use the Ratios of Risk in a clinical context. To recap, we previously discussed an absolute measure of risk difference
(appropriately called the risk difference or RD), as well as a relative measure of risk difference (relative risk or RR).
To see how we can apply these risks, let’s tweak our original example. Let’s assume that smartphone thumb could potentially lead to loss of thumb function (not really, don’t worry!). Let’s also
suppose that surgery is a possible treatment for smartphone thumb, and the following results were obtained after a trial.
│ │Surgery│No Surgery (control) │Totals│
│Retained │7 │6 │13 │
│ │ │ │ │
│thumb function │ │ │ │
│Lost thumb function│3 │4 │7 │
│Totals │10 │10 │20 │
The big question is: how good an option is surgery?
Let’s calculate the RD (note that the “risk” here is of losing thumb function): 4/10 – 3/10= 0.1
In other words, there is a 10% greater risk of loosing thumb function if you did not have the surgery. Based on this information alone (or by calculating the RR and OR), we might be quick to conclude
that surgery is a great intervention.
But before we do that, let’s calculate another statistic, which will prove to be very useful: it’s called the number needed to treat (or NNT), and is given by 1/RD. The NNT is the number of patients
that must be treated for 1 additional patient to derive some benefit (retain an intact and functioning thumb). In our case, NNT = 1/0.1 = 10. So, in order save 1 patient from loosing his thumb,
another 9 will have had to undergo surgery with no apparent benefit. As you can see, the NNT sheds a very humbling light on our intervention. The ideal NNT is equal to 1. Beyond that, we must keep in
mind that the additional patients undergoing the treatment have been exposed to all the negative side effects, without the intended benefit.
Throughout this series we discussed the meaning of risk, how it can be used for comparison (the various ratios of risk), and finally its application in a clinical setting (the ramifications of risk).
After all these posts, smartphone thumb may have started to seem like a very real threat. But I think you should be fine…. as long as you know the risks!
So what’s up with the Dodge Ram ad (I am actually a F150 guy myself)? Well I just thought it went well with ramifications of risk. Cheesy I know. But who knows maybe it will help you to remember…
See you in the blogosphere,
Indranil Balki and Pascal Tyrrell | {"url":"https://www.tyrrell4innovation.ca/tag/ram/","timestamp":"2024-11-05T07:28:52Z","content_type":"text/html","content_length":"102151","record_id":"<urn:uuid:2053f324-d85c-46b1-b9e2-51486e8fe2bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00481.warc.gz"} |
Convert micrometers to nautical miles ( um to nmi )
Last Updated: 2024-11-06 07:00:11 , Total Usage: 1173138
Converting micrometers to nautical miles juxtaposes a unit commonly used in microscopic measurements with a unit used in maritime and aerial navigation. This type of conversion is a prime example of
how different measurement systems are used across various fields and scales.
Historical or Origin
Micrometers (µm): A micrometer, or micron, is a metric unit of length, equivalent to one-millionth of a meter. It is widely used in scientific and engineering fields for precise measurements at the
Nautical Miles (nmi): A nautical mile is a unit of distance used in maritime and aerial navigation. Historically based on the Earth's circumference, it is now defined as exactly 1,852 meters.
Nautical miles are used because they correspond more closely to a minute of latitude, making them suitable for charting and navigating the globe.
Calculation Formula
The formula to convert micrometers to nautical miles is:
\[ \text{Nautical Miles} = \text{Micrometers} \times \text{Conversion Factor} \]
The conversion factor from micrometers to nautical miles is approximately \(5.39957 \times 10^{-10}\), since one nautical mile equals 1,852,000,000 micrometers.
Example Calculation
For example, if you want to convert 10,000,000 micrometers to nautical miles, the calculation would be:
\[ \text{Nautical Miles} = 10,000,000 \times 5.39957 \times 10^{-10} \approx 0.00539957 \text{ nmi} \]
Why It's Needed and Use Cases
This conversion is rarely used in practical applications but can be important in certain scientific and technical contexts where distances need to be translated from microscopic scales (measured in
micrometers) to navigational scales (measured in nautical miles). It's also an educational tool for understanding the vast range of measurement units.
Common Questions (FAQ)
• Why are micrometers and nautical miles important? Micrometers are essential for precise microscale measurements, while nautical miles are crucial in navigation due to their relationship with the
Earth's geography.
• Is this conversion precise? The conversion is mathematically precise, based on the defined lengths of a micrometer and a nautical mile.
• How often is this conversion used? Converting micrometers to nautical miles is more of an academic exercise than a frequent practical necessity.
In summary, converting micrometers to nautical miles is an excellent example of the wide spectrum of measurement units used in different fields, from microscopic science to global navigation. While
not commonly needed in everyday scenarios, it showcases the versatility and range of the metric system and the imperial system used in navigation. | {"url":"https://calculator.fans/en/tool/um-to-nmi-convertor.html","timestamp":"2024-11-06T17:44:12Z","content_type":"text/html","content_length":"12761","record_id":"<urn:uuid:acc23a72-bd57-4ae5-8424-ebc242095843>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00043.warc.gz"} |
Instructional Program
Sample tasks from Stage 3, Level 1: Addition to 5, Missing Whole
Support Students Who Need Help in Math
The best practices in developmental psychology and cognitive science are the cornerstones of Symphony Math®. Our uniquely designed delivery methods ensure that students – regardless of learning
styles or knowledge levels – fully grasp fundamental mathematical ideas, even for difficult-to-explain and abstract concepts. The result is a solid foundation for acquiring higher math skills, as
well as a positive learning experience. The primary elements in the Symphony Math® approach, which cover Common Core Math standards in grades K-5, include:
Conceptual Sequences of the Most Important Mathematical Ideas
A tightly connected progression forms the conceptual sequence of Symphony Math®. These underlying “big ideas” provide the foundation for mathematical learning. As students master each big idea before
moving on to the next, they learn to succeed with more complicated math later on.
Mathematical Topic Underlying Big Idea
Sequencing, Number Quantity
Addition and Subtraction Parts-to-whole
Place Value Hierarchical grouping
Multiplication and Division Repeated equal grouping
Multi-digit Addition and Subtraction Hierarchical grouping coordinated with parts-to-whole
Fractions Repeated equal grouping coordinated with parts-to-whole
Multiple Ways of Knowing
Six distinct activity environments provide multiple representations of each concept and integrate with the conceptual sequence.
Activity Purpose
Manipulatives Conceptually understand what the concept “looks like”
Manipulatives & Symbols Explicitly connect symbols to visual representations
Symbols Understand concepts at abstract levels
Auditory Sentences Learn the formal language of math
Story Problems Apply learning to real life problem solving
Mastery Round Develop immediate recall of number relationships
Adaptive Learning Engine
Students’ path through the Symphony Math® program is adjusted during each session of use. As students show mastery, they move quickly to new material. When they demonstrate a need for more practice,
they more more deliberately through content, going through all available visual models and moving from concrete to abstract representations.
The dynamic branching of Symphony Math allows students to learn at their own levels. As the program illuminates an area of need, progress slows until the student achieves the necessary understanding.
Students will move in and out of different branching modes as they work through the program. If they are challenged, and remain in a Focus Group for multiple attempts, they will be flagged in the
‘HELP’ data view on your Symphony Dashboard.
Try Some Sample Tasks!
Sequencing (Kindergarten)
Parts to Whole: Addition (Grade 1)
Repeated Addition: Foundations of Multiplication (Grade 2)
Intro to Fractions (Grade 3)
Expanded Mode Multiplication (Grade 4) | {"url":"https://symphonylearning.com/overview/instruction/","timestamp":"2024-11-06T01:48:30Z","content_type":"text/html","content_length":"66957","record_id":"<urn:uuid:4f6949a7-1602-4ffd-855e-d8bceaff4509>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00024.warc.gz"} |
Physics Calculators | List of Physics Calculators
List of Physics Calculators
Physics calculators give you a list of online Physics calculators. A tool perform calculations on the concepts and applications for Physics calculations.
These calculators will be useful for everyone and save time with the complex procedure involved to obtain the calculation results. You can also download, share as well as print the list of Physics
calculators with all the formulas. | {"url":"https://www.calculatoratoz.com/en/physics-Calculators/CalcList-294","timestamp":"2024-11-07T07:48:18Z","content_type":"application/xhtml+xml","content_length":"86387","record_id":"<urn:uuid:f794f694-d1b6-4c3b-9499-3c867aa5b82d>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00320.warc.gz"} |
locally convex topological vector space
I have added to locally convex topological vector space the standard alternative characterization of continuity of linear functionals by a bound for one of the seminorms: here
(proof and/or more canonical reference should still be added).
I have added the definition of directed set of seminorms, here, and the fact that we may replace by the system of maxima over finite inhabited subsets of seminorms, here, and the corresponding
characterization of continuity, in item 2 here.
I suppose I should add that instead of maxima we may also take sums of seminorms?
Added reference to the inductive tensor product and how this makes $lctvs$ into a symmetric closed monoidal category.
diff, v23, current | {"url":"https://nforum.ncatlab.org/discussion/8100/locally-convex-topological-vector-space/","timestamp":"2024-11-12T13:24:57Z","content_type":"application/xhtml+xml","content_length":"16831","record_id":"<urn:uuid:08c0e513-fdab-48ff-abc0-92c3465e8e51>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00198.warc.gz"} |
07-02-2020 10:15 AM
Exponential Family on Statistical Procedures. 10-17-2022 08:04 AM
Exact GEE logistic regression on Statistical Procedures. 10-25-2022 07:30 AM
Re: Exact GEE logistic regression on Statistical Procedures. 10-31-2022 07:35 PM
Wald test Logistic regression on Statistical Procedures. 04-13-2023 08:58 AM
Re: Wald test Logistic regression on Statistical Procedures. 04-17-2023 09:45 AM
longitudinal data on Statistical Procedures. 06-03-2023 04:58 PM
Re: longitudinal data on Statistical Procedures. 06-08-2023 08:19 AM
Test for median difference on Statistical Procedures. 08-22-2023 06:43 PM
Comparing two ROC curves on Statistical Procedures. 09-12-2023 03:54 PM
Re: Comparing two ROC curves on Statistical Procedures. 09-13-2023 10:45 AM
Survival Random Forest on Statistical Procedures. 09-28-2023 05:57 PM
Re: Survival Random Forest on Statistical Procedures. 10-02-2023 08:10 AM
Re: Survival Random Forest on Statistical Procedures. 10-02-2023 12:08 PM
Fraity Model with PhReg on Statistical Procedures. 11-06-2023 03:29 PM
Imputation on Statistical Procedures. 08-17-2024 02:12 PM
Re: Imputation on Statistical Procedures. 08-17-2024 03:34 PM
Re: Imputation on Statistical Procedures. 08-17-2024 05:47 PM
Power interaction on Statistical Procedures. 09-30-2024 01:07 PM
Re: Power interaction on Statistical Procedures. 10-02-2024 08:14 AM
Dear all, I am still struggling to estimate 95% confidence interval for a data set in which we did not observe the event. I have a dummy variable that indicates the event occurrence and the number of
person-years. I did not observe nay event and the number of person-years is equal to 116.42. Then, the incidence rate is zero (0/116.42). But I would like to estimate the confidence interval. I used
Proc Genmod and it gives me a perfect result when events occurred. Whithout events I do not get the confidence intervals.It would be possible to estimate the confidence interval? Regards, Iuri
... View more | {"url":"https://communities.sas.com/t5/user/v2/viewprofilepage/user-id/198343/page/3","timestamp":"2024-11-14T07:38:21Z","content_type":"text/html","content_length":"240463","record_id":"<urn:uuid:b2e6aef9-91c2-4ae6-adce-de15db3fc78f>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00507.warc.gz"} |
2023 AMC 12A Problems/Problem 8
The following problem is from both the 2023 AMC 10A #10 and 2023 AMC 12A #8, so both problems redirect to this page.
Maureen is keeping track of the mean of her quiz scores this semester. If Maureen scores an $11$ on the next quiz, her mean will increase by $1$. If she scores an $11$ on each of the next three
quizzes, her mean will increase by $2$. What is the mean of her quiz scores currently? $\textbf{(A) }4\qquad\textbf{(B) }5\qquad\textbf{(C) }6\qquad\textbf{(D) }7\qquad\textbf{(E) }8$
Solution 1
Let $a$ represent the amount of tests taken previously and $x$ the mean of the scores taken previously.
We can write the following equations:
$\[\frac{ax+11}{a+1}=x+1\qquad (1)\]$$\[\frac{ax+33}{a+3}=x+2\qquad (2)\]$
Multiplying equation $(1)$ by $(a+1)$ and solving, we get: $\[ax+11=ax+a+x+1\]$$\[11=a+x+1\]$$\[a+x=10\qquad (3)\]$
Multiplying equation $(2)$ by $(a+3)$ and solving, we get: $\[ax+33=ax+2a+3x+6\]$$\[33=2a+3x+6\]$$\[2a+3x=27\qquad (4)\]$
Solving the system of equations for $(3)$ and $(4)$, we find that $a=3$ and $x=\boxed{\textbf{(D) }7}$.
~walmartbrian ~Shontai ~andyluo ~megaboy6679
Solution 2 (Variation on Solution 1)
Suppose Maureen took $n$ tests with an average of $m$.
If she takes another test, her new average is $\frac{(nm+11)}{(n+1)}=m+1$
Cross-multiplying: $nm+11=nm+n+m+1$, so $n+m=10$.
If she takes $3$ more tests, her new average is $\frac{(nm+33)}{(n+3)}=m+2$
Cross-multiplying: $nm+33=nm+2n+3m+6$, so $2n+3m=27$.
But $2n+3m$ can also be written as $2(n+m)+m=20+m$. Therefore $m=27-20=\boxed{\textbf{(D) }7}$
~Dilip ~megaboy6679 (latex)
Solution 3 (do this if you are bored)
Let $s$ represent the sum of Maureen's test scores previously and $t$ be the number of scores taken previously.
So, $\frac{s+11}{t+1} = \frac{s}{t}+1$ and $\frac{s+33}{t+3} = \frac{s}{t}+2$
We can use the first equation to write $s$ in terms of $t$.
We then substitute this into the second equation: $\frac{-t^2+10t+33}{t+3} = \frac{-t^2+10t}{t}+2$
From here, we solve for t: multiply both sides by ($t$) and then ($t+3$), combining like terms to get $t^2-3t=0$. Factorize to get $t=0$ or $t=3$, and therefore $t=3$ (makes sense for the problem).
We substitute this to get $s=21$.
Therefore, the solution to the problem is $\frac{21}{3}=$$\boxed{\textbf{(D) }7}$
~milquetoast ~the_eaglercraft_grinder
Solution 4 (Testing Answer Choices)
Let's consider all the answer choices. If the average is $8$, then, we can assume that all her test choices were $8$. We can see that she must have gotten $8$ twice, in order for another score of
$11$ to bring her average up by one. However, adding three $11$'s will not bring her score up to 10. Continuing this process for the answer choices, we see that the answer is $\boxed{\textbf{(D) }7}$
~andliu766 (minor wording edit by mihikamishra)
Solution 5
Let $n$ be the number of existing quizzes. So after one more test, score $11$ has $n+1$ extra points to distribute to $n+1$ quizzes. Also, after three more quizzes, there will be $3(n+1)$ extra
points to distribute to the $n+3$ quizzes. So $3n+3=2(n+3)$. This means $n=3$. $n+1$ extra points means original mean (average) is $7$
Video Solution by Math-X (First understand the problem!!!)
https://youtu.be/GP-DYudh5qU?si=fQ77Xhb7x1EP_Ieh&t=2361 ~Math-X
Video Solution by Power Solve (easy to digest!)
Video Solution (🚀 Just 3 min 🚀)
~Education, the Study of Everything
Video Solution by CosineMethod [🔥Fast and Easy🔥]
Video Solution
~Steven Chen (Professor Chen Education Palace, www.professorchenedu.com)
See Also
The problems on this page are copyrighted by the Mathematical Association of America's American Mathematics Competitions. | {"url":"https://artofproblemsolving.com/wiki/index.php?title=2023_AMC_12A_Problems/Problem_8&oldid=226590","timestamp":"2024-11-02T11:39:16Z","content_type":"text/html","content_length":"64161","record_id":"<urn:uuid:f4e8c95e-2163-4fce-9f13-d2ad79c80047>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00805.warc.gz"} |
(PDF) Do Black Holes have Singularities?
This may be a simplification but it is a very useful one. The Kerr solution can
be used to approximate the field outside a stationary, rotating body with mass
m, angular momentum ma, and radius larger than 2m. The best example is
a fast-rotating neutron star too light to be a black hole. How accurate is this
metric? Probably better than most! If Ris an approximately radial coordinate
then the rotational and Newtonian ”forces” outside the source drop off like R−3
and R−2, respectively3. Clearly, spin is important close in but mass dominates
further out. These are joined by ”pressure” near the centre where the others
vanish. Most, probably all, believe this ”standard model” is nonsingular for
neutron stars4, but not for black holes. Why the difference? The actual density
can even be lower for a very large and fast rotating black hole interior.
Suppose a neutron star is accreting matter, perhaps from an initial super-
nova. The centrifugal force can be comparable to the Newtonian force near the
surface5, but further out there will be a region where it drops away and mass
dominates. It can be comparatively easy to launch a rocket from the surface,
thanks to the slingshot effect; further out it will require a high velocity and/or
acceleration to escape from the star. This intermediate region will gradually
become a no-go zone as the mass increases and the radius decreases, i.e.an
event shell and therefore black hole forms. Why do so many believe that the
star inside must become singular at this moment? Faith, not science! Sixty
years without a proof, but they believe!. Brandon Carter calculated the geodesic
equations inside Kerr. showing that it is possible to travel in any direction be-
tween the central body and the inner horizon.. There is no trapped surface in
this region, just in the event shell between the horizons.
The work of David Robinson and others shows that a real black hole will
have the Kerr solution as a good approximation to its exterior but a physically
realistic, non-vacuum, non-singular interior. Since these objects are also accret-
ing, both horizons of Kerr should be replaced by apparent horizons. As the
black hole stops growing, Kerr is likely to be a closer and closer approximation
outside the inner horizon. The singularity theorems do not demonstrate how
(or if) FALL’s arise in such environments but that of Hawking claims that these
must always form in our universe, given that almost-closed time-like loops do
not.6It is probably true that the existence of FALL’s show that horizons exist
and that these contain black holes. Proving this would be a good result for a
doctoral student. There are indications that these are inevitable. Astronomers
3Calculations by the author used the corrected EIH equations in the late fifties to show
this is accurate for slow moving bodies at large distances (and reasonable elsewhere)
4Outside the Earth centrifugal force plays a minor role but is still important for sending
rockets into space. That is why the launch sites are chosen as close to the equator as possible.
After the initial vertical trajectory they travel east with the Earth’s rotation rather than west
against it.
5If the body rotated too quickly then the surface would disintegrate. This puts a lower
limit on the possible size of the star.
6Hawking originally claimed, when visiting UT for a weekend, that closed loops were the
alternative. I said in a private conversation to Hawking and George Ellis that after thinking
about it over the weekend I could not quite prove this, just that ”almost-closed” loops were
the alternative. Steven subsequently changed his paper to agree with this. A different name
is given in Hawking and Ellis[19] and attributed to me. | {"url":"https://www.researchgate.net/publication/375744216_Do_Black_Holes_have_Singularities","timestamp":"2024-11-02T18:13:13Z","content_type":"text/html","content_length":"765581","record_id":"<urn:uuid:a0ce6a0f-03d8-4493-a144-4b3fc42d7046>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00138.warc.gz"} |
bolasso 0.2.0
• Added a NEWS.md file to track changes to the package.
• bolasso() argument form has been renamed to formula to reflect common naming conventions in R statistical modeling packages.
• predict() and coef() methods are now implemented using future.apply::future_lapply allowing for computing predictions and extracting coefficients in parallel. This may result in slightly worse
performance (due to memory overhead) when the model/prediction data is small but will be significantly faster when e.g. generating predictions on a very large data-set.
• Solved an issue with bolasso() argument formula. The user-supplied value of formula is handled via deparse() which has a default width.cutoff value of 60. This was causing issues with formulas by
splitting them into multi-element character vectors. It has now been set to the maximum value of 500L which will correctly parse all lengths of formulas.
• predict() now forces evaluation of the formula argument in the bolasso() call. This resolves an issue where, if a user passes a formula via a variable, predict() would pass the variable name to
the underlying prediction function as opposed to the actual formula. | {"url":"https://cran.r-project.org/web/packages/bolasso/news/news.html","timestamp":"2024-11-14T07:18:54Z","content_type":"application/xhtml+xml","content_length":"2671","record_id":"<urn:uuid:a1e80ef7-ca2b-461f-bfa9-ecedb2900eee>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00770.warc.gz"} |
Distance x, a provided hypha branches into k hyphae (i.e., precisely k - 1 branching | URAT1 inhibitor urat1inhibitor.com
Distance x, a provided hypha branches into k hyphae (i.e., precisely k – 1 branching events happen), the fpk g satisfy master equations dpk = – 1 k-1 – kpk . dx Solving these equations utilizing
typical approaches (SI Text), we discover that the likelihood of a pair of nuclei ending up in unique hyphal strategies is pmix two – two =6 0:355, because the number of recommendations goes to
infinity. Numerical simulations on randomly branching colonies using a biologically relevant number of recommendations (SI Text and Fig. 4C,”random”) give pmix = 0:368, pretty close to this
asymptotic worth. It follows that in randomly branching networks, nearly two-thirds of sibling nuclei are delivered for the exact same hyphal tip, instead of becoming separated inside the colony.
Hyphal branching patterns is often optimized to increase the mixing probability, but only by 25 . To compute the maximal mixing probability for any hyphal network using a given biomass we fixed the x
locations of your branch points but in lieu of permitting hyphae to branch randomly, we assigned branches to hyphae to maximize pmix . Suppose that the total quantity of suggestions is N (i.e., N – 1
branching events) and that at some station within the colony thereP m branch hyphae, with all the ith branch feeding into ni are tips m ni = N Then the likelihood of two nuclei from a rani=1 P1 1
domly chosen hypha arriving at the identical tip is m ni . The harmonic-mean arithmetric-mean mTORC1 Activator drug inequality offers that this likelihood is minimized by taking ni = N=m, i.e., if
each hypha feeds in to the similar variety of tips. However, can ideas be evenlyRoper et al.distributed between hyphae at every stage within the branching hierarchy We searched numerically for the
sequence of branches to maximize pmix (SI Text). Surprisingly, we discovered that maximal mixing constrains only the lengths on the tip hyphae: Our numerical optimization algorithm located lots of
networks with extremely dissimilar topologies, but they, by possessing similar distributions of tip lengths, had close to identical values for pmix (Fig. 4C, “optimal,” SI Text, and Fig. S7). The
probability of two nuclei ending up at unique suggestions is pmix = 0:five in the limit of a large variety of recommendations (SI Text) and to get a network with a biologically appropriate number of
tips, we compute pmix = 0:459. Optimization of branching consequently increases the likelihood of sibling nuclei getting separated inside the colony by 25 more than a random network. In actual N.
crassa cells, we found that the flow price in each hypha is directly proportional to the number of guidelines that it feeds (Fig. 4B, Inset); this is constant with SIK3 Inhibitor Storage & Stability
conservation of flow at every hyphal branch point–if tip hyphae have comparable growth rates and dimensions, viz. exactly the same flow rate Q, then a hypha that feeds N ideas may have flow rate NQ.
Thus, from flow-rate measurements we are able to ascertain the position of each hypha within the branching hierarchy. We checked irrespective of whether real fungal networks obey the exact same
branching rules as theoretically optimal networks by generating a histogram of your relative abundances of hyphae feeding 1, 2, . . . suggestions. Even for colonies of extremely different ages the
branching hierarchy for genuine colonies matches really precisely the optimal hyphal branching, in distinct by having a substantially smaller sized fraction of hyphae feeding amongst 1 and three tips
than a randomly branching network (Fig. 4D).PNAS | August 6, 2013 | vol. 110 | no. 32 |MICROBIOLOGYAPPLIED MATHEMATICSAdistance traveled (mm)25 20 15 ten five 0 0 two 4 time (hrs)0.1 0.08 0.06 0.04
0.B2 3 six three 9 2 m3/s )one hundred 0Crandom10D0.six relative freq 0.four. | {"url":"https://www.urat1inhibitor.com/2023/08/09/25222/","timestamp":"2024-11-04T04:47:06Z","content_type":"text/html","content_length":"45029","record_id":"<urn:uuid:35abeeae-ced8-446c-acd4-01e5e9b51253>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00327.warc.gz"} |
Physics IGCSE: Electricity & Magnetism - Lawra Academy
Current Status
Not Enrolled
Get Started
Physics Cambridge IGCSE Course – Code 0625 and 0972: Topic 11 Electricity and Topic 12 Magnetism
Course Description:
Welcome to our Physics Cambridge IGCSE course, meticulously tailored to help you conquer the intricacies of Topics 11 and 12: Electricity and Magnetism. This comprehensive course is specifically
designed for both Code 0625 and Code 0972 syllabi, ensuring you’re well-prepared for the IGCSE Physics examination.
Course Highlights:
• Comprehensive Core and Supplement Objectives: Master all core and supplement objectives as outlined in the syllabus for theory (papers 1, 2, 3, 4). These objectives provide a solid foundation for
exam success.
• Thoroughly Explained Experiments: Gain a deep understanding of practical experiments with detailed explanations. Notes summarize the method, results, interpretation, evaluation, reliability, and
conclusion, enabling you to excel in paper 6 questions.
Course Outline & Objectives (Core and Supplement):
1. Electric Charge (Lesson 1): Key objectives include:
• State that there are positive and negative charges.
• State that positive charges repel other positive charges, negative charges repel other negative charges, but positive charges attract negative charges.
• Describe simple experiments to show the production of electrostatic charges by friction and to show the detection of electrostatic charges.
• Explain that charging of solids by friction involves only a transfer of negative charge (electrons).
• Describe an experiment to distinguish between electrical conductors and insulators.
• Recall and use a simple electron model to explain the difference between electrical conductors and insulators and give typical examples.
• State that charge is measured in coulombs.
• Describe an electric field as a region in which an electric charge experiences a force.
• State that the direction of an electric field at a point is the direction of the force on a positive charge at that point.
• Describe simple electric field patterns, including the direction of the field:
(a) around a point charge
(b) around a charged conducting sphere
(c) between two oppositely charged parallel conducting plates (end effects will not be examined)
2. Electric Current (Lesson 2): Key objectives include:
• Know that electric current is related to the flow of charge.
• Describe the use of ammeters (analogue and digital) with different ranges.
• Describe electrical conduction in metals in terms of the movement of free electrons.
• Know the difference between direct current (d.c.) and alternating current (a.c.).
• Define electric current as the charge passing a point per unit time; recall and use the equation
• State that conventional current is from positive o negative and that the flow of free electrons is from negative to positive.
• Define electromotive force (e.m.f.) as the electrical work done by a source in moving a unit charge around a complete circuit.
• Know that e.m.f. is measured in volts (V).
• Define potential difference (p.d.) as the work done by a unit charge passing through a component.
• Know that the p.d. between two points is measured in volts (V).
• Describe the use of voltmeters (analogue and digital) with different ranges.
• Recall and use the equation for e.m.f.
• Recall and use the equation for p.d.
• Recall and use the equation for resistance
• Recall and use the following relationship for a metallic electrical conductor:
(a) resistance is directly proportional to length
(b) resistance is inversely proportional to cross-sectional area.
• State, qualitatively, the relationship of the resistance of a metallic wire to its length and to its cross-sectional area.
• Understand that electric circuits transfer energy from a source of electrical energy, such as an electrical cell or mains supply, to the circuit components and then into the surroundings.
3. Electric circuits (Lesson 3): Key objectives include:
• Know that the current at every point in a series circuit is the same.
• Know how to construct and use series and parallel circuits.
• Calculate the combined e.m.f. of several sources in series.
• Calculate the combined resistance of two or more resistors in series.
• State that, for a parallel circuit, the current from the source is larger than the current in each branch.
• State that the combined resistance of two resistors in parallel is less than that of either resistor by itself.
• State the advantages of connecting lamps in parallel in a lighting circuit.
• Recall and use in calculations, the fact that:
(a) the sum of the currents entering a junction in a parallel circuit is equal to the sum of the currents that leave the junction
(b) the total p.d. across the components in a series circuit is equal to the sum of the individual p.d.s across each component
(c) the p.d. across an arrangement of parallel resistances is the same as the p.d. across one branch in the arrangement of the parallel resistances.
• Explain that the sum of the currents into a junction is the same as the sum of the currents out of the junction.
• Calculate the combined resistance of two resistors in parallel.
• Know that the p.d. across an electrical conductor increases as its resistance increases for a constant current.
• Describe the action of a variable potential divider.
• Recall and use the equation for two resistors used as a potential divider
• Draw and interpret circuit diagrams containing cells, batteries, power supplies, generators, potential dividers, switches, resistors (fixed and variable), heaters, thermistors (NTC only),
light-dependent resistors (LDRs), lamps, motors, ammeters, voltmeters, magnetising coils, transformers, fuses and relays, and know how these components behave in the circuit.
• Describe an experiment to determine resistance using a voltmeter and an ammeter and do the
appropriate calculations.
• Draw and interpret circuit diagrams containing diodes and light-emitting diodes (LEDs), and know how these components behave in the circuit.
• Sketch and explain the current–voltage graphs for a resistor of constant resistance, a filament lamp and a diode.
• Recall and use the equation for electrical power
• Recall and use the equation for electrical energy.
• Define the kilowatt-hour (kW h) and calculate the cost of using electrical appliances where the energy unit is the kW h.
4. Electrical safety (Lesson 4): Key objectives include:
(a) damaged insulation
(b) overheating cables
(c) damp conditions
(d) excess current from overloading of plugs, extension leads, single and multiple sockets when using a mains supply.
• Know that a mains circuit consists of a live wire (line wire), a neutral wire and an earth wire and explain why a switch must be connected to the live wire for the circuit to be switched off
• Explain the use and operation of trip switches and fuses and choose appropriate fuse ratings and trip switch settings.
• Explain why the outer casing of an electrical appliance must be either non-conducting (double-insulated) or earthed.
• State that a fuse without an earth wire protects the circuit and the cabling for a double-insulated
2. Simple phenomena of magnetism (Lesson 1): Key objectives include:
• Describe the forces between magnetic poles and between magnets and magnetic materials, including the use of the terms north pole (N pole), south pole (S pole), attraction and repulsion,
magnetized and unmagnetized.
• Describe induced magnetism.
• State the differences between the properties of temporary magnets (made of soft iron) and the properties of permanent magnets (made of steel).
• State the difference between magnetic and non-magnetic materials.
• Describe a magnetic field as a region in which a magnetic pole experiences a force.
• Draw the pattern and direction of magnetic field lines around a bar magnet.
• State that the direction of a magnetic field at a point is the direction of the force on the N pole of a magnet at that point.
• Describe the plotting of magnetic field lines with a compass or iron filings and the use of a compass to determine the direction of the magnetic field.
• Explain that magnetic forces are due to interactions between magnetic fields.
• Know that the relative strength of a magnetic field is represented by the spacing of the magnetic field lines.
4. Electromagnets (Lesson 2): Key objectives include:
• Describe the pattern and direction of the magnetic field due to currents in straight wires and in solenoids.
• Describe an experiment to identify the pattern of the magnetic field (including direction) due to currents in straight wires and in solenoids.
• Describe how the magnetic effect of a current is used in relays and loudspeakers and give examples of their application.
• State the qualitative variation of the strength of the magnetic field around straight wires and solenoids.
• Describe the effect on the magnetic field around straight wires and solenoids of changing the magnitude and direction of the current.
• Describe the uses of permanent magnets and electromagnets.
5. Electromagnetic force (Lesson 3): Key objectives include:
• Describe an experiment to show that a force acts on a current-carrying conductor in a magnetic field, including the effect of reversing:
(a) the current
(b) the direction of the field
• Recall and use the relative directions of force, magnetic field and current.
• Determine the direction of the force on beams of charged particles in a magnetic field.
• Know that a current-carrying coil in a magnetic field may experience a turning effect and that the turning effect is increased by increasing:
(a) the number of turns on the coil
(b) the current
(c) the strength of the magnetic field
• Describe the operation of an electric motor, including the action of a split-ring commutator and brushes.
5. Electromagnetic Induction (Lesson 4): Key objectives include:
• Know that a conductor moving across a magnetic field or a changing magnetic field linking with a conductor can induce an e.m.f. in the conductor.
• Describe an experiment to demonstrate electromagnetic induction.
• State the factors affecting the magnitude of an induced e.m.f.
• Know that the direction of an induced e.m.f. opposes the change causing it.
• State and use the relative directions of force, field and induced current.
• Describe a simple form of a.c. generator (rotating coil or rotating magnet) and the use of slip rings and brushes where needed.
• Sketch and interpret graphs of e.m.f. against time or simple a.c. generators and relate the position of the generator coil to the peaks, troughs and zeros of the e.m.f.
6. Transformers (Lesson 5): Key objectives include:
• Describe the construction of a simple transformer with a soft iron core, as used for voltage transformations.
• Use the terms primary, secondary, step-up and step-down.
• Recall and use the equation where p and s refer to primary and secondary
• Describe the use of transformers in high-voltage transmission of electricity.
• State the advantages of high-voltage transmission.
• Explain the principle of operation of a simple iron-cored transformer.
• Recall and use the equation for 100% efficiency in a transformer where p and s refer to primary and secondary
• Recall and use the following equation to explain why power losses in cables are smaller when the voltage is greater.
Course Benefits:
• Engage with meticulously crafted video lessons, providing comprehensive explanations of each lesson and experiment.
• Access downloadable summary study sheets that condense essential information, aiding your pursuit of an A* grade.
• Enhance your preparation with assignments based on past papers to boost your confidence for the exam.
• Become part of a dynamic student group community, where you can interact with fellow learners and the course instructor, asking questions and sharing updates.
Unlock the world of business and prepare to excel in the IGCSE Business examination through our “Physics Cambridge IGCSE Course – 0625 and Code 0972 : “Electricity and Magnetism” course.
Lesson Content
0% Complete 0/3 Steps
You will need:
• Computer or Mobile
• Internet | {"url":"https://lawra-academy.com/courses/physics-igcse-electricity-magnetism/","timestamp":"2024-11-04T07:38:22Z","content_type":"text/html","content_length":"320231","record_id":"<urn:uuid:dc7611cf-6816-464f-8207-7159175f19e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00713.warc.gz"} |
Yet Another Haskell Tutorial/Monads - Wikibooks, open books for an open world
The most difficult concept to master while learning Haskell is that of understanding and using monads. We can distinguish two subcomponents here: (1) learning how to use existing monads and (2)
learning how to write new ones. If you want to use Haskell, you must learn to use existing monads. On the other hand, you will only need to learn to write your own monads if you want to become a
"super Haskell guru." Still, if you can grasp writing your own monads, programming in Haskell will be much more pleasant.
So far we've seen two uses of monads. The first use was IO actions: We've seen that, by using monads, we can get away from the problems plaguing the RealWorld solution to IO presented in the chapter
IO. The second use was representing different types of computations in the section on Classes-computations. In both cases, we needed a way to sequence operations and saw that a sufficient definition
(at least for computations) was:
class Computation c where
success :: a -> c a
failure :: String -> c a
augment :: c a -> (a -> c b) -> c b
combine :: c a -> c a -> c a
Let's see if this definition will enable us to also perform IO. Essentially, we need a way to represent taking a value out of an action and performing some new operation on it (as in the example from
the section on Functions-io, rephrased slightly):
main = do
s <- readFile "somefile"
putStrLn (show (f s))
But this is exactly what augment does. Using augment, we can write the above code as:
main = -- note the lack of a "do"
readFile "somefile" `augment` \s ->
putStrLn (show (f s))
This certainly seems to be sufficient. And, in fact, it turns out to be more than sufficient.
The definition of a monad is a slightly trimmed-down version of our Computation class. The Monad class has four methods (but the fourth method can be defined in terms of the third):
class Monad m where
return :: a -> m a
fail :: String -> m a
(>>=) :: m a -> (a -> m b) -> m b
(>>) :: m a -> m b -> m b
In this definition, return is equivalent to our success; fail is equivalent to our failure; and >>= (read: "bind" ) is equivalent to our augment. The >> (read: "then" ) method is simply a version of
>>= that ignores the a. This will turn out to be useful; although, as mentioned before, it can be defined in terms of >>=:
a >> x = a >>= \_ -> x
We have hinted that there is a connection between monads and the do notation. Here, we make that relationship concrete. There is actually nothing magic about the do notation – it is simply "syntactic
sugar" for monadic operations.
As we mentioned earlier, using our Computation class, we could define our above program as:
main =
readFile "somefile" `augment` \s ->
putStrLn (show (f s))
But we now know that augment is called >>= in the monadic world. Thus, this program really reads:
main =
readFile "somefile" >>= \s ->
putStrLn (show (f s))
And this is completely valid Haskell at this point: if you defined a function f :: Show a => String -> a, you could compile and run this program)
This suggests that we can translate:
x <- f
g x
into f >>= \x -> g x. This is exactly what the compiler does. Talking about do becomes easier if we do not use implicit layout (see the section on Layout for how to do this). There are four
translation rules:
1. do {e} → e
2. do {e; es} → e >> do {es}
3. do {let decls; es} → let decls in do {es}
4. do {p <- e; es} → let ok p = do {es} ; ok _ = fail "..." in e >>= ok
Again, we will elaborate on these one at a time:
The first translation rule, do {e} → e, states (as we have stated before) that when performing a single action, having a do or not is irrelevant. This is essentially the base case for an inductive
definition of do. The base case has one action (namely e here); the other three translation rules handle the cases where there is more than one action.
This states that do {e; es} → e >> do {es}. This tells us what to do if we have an action (e) followed by a list of actions (es). Here, we make use of the >> function, defined earlier. This rule
simply states that to do {e; es}, we first perform the action e, throw away the result, and then do es.
For instance, if e is putStrLn s for some string s, then the translation of do {e; es} is to perform e (i.e., print the string) and then do es. This is clearly what we want.
This states that do {let decls; es} → let decls in do {es}. This rule tells us how to deal with lets inside of a do statement. We lift the declarations within the let out and do whatever comes after
the declarations.
This states that do {p <- e; es} → let ok p = do {es} ; ok _ = fail "..." in e >>= ok. Again, it is not exactly obvious what is going on here. However, an alternate formulation of this rule, which is
roughly equivalent, is: do {p <- e; es} → e >>= \p -> es. Here, it is clear what is happening. We run the action e, and then send the results into es, but first give the result the name p.
The reason for the complex definition is that p doesn't need to simply be a variable; it could be some complex pattern. For instance, the following is valid code:
foo = do ('a':'b':'c':x:xs) <- getLine
putStrLn (x:xs)
In this, we're assuming that the results of the action getLine will begin with the string "abc" and will have at least one more character. The question becomes what should happen if this pattern
match fails. The compiler could simply throw an error, like usual, for failed pattern matches. However, since we're within a monad, we have access to a special fail function, and we'd prefer to fail
using that function, rather than the "catch all" error function. Thus, the translation, as defined, allows the compiler to fill in the ... with an appropriate error message about the pattern matching
having failed. Apart from this, the two definitions are equivalent.
There are three rules that all monads must obey called the "Monad Laws" (and it is up to you to ensure that your monads obey these rules) :
1. return a >>= f ≡ f a
2. f >>= return ≡ f
3. f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h
Let's look at each of these individually:
This states that return a >>= f ≡ f a. Suppose we think about monads as computations. This means that if we create a trivial computation that simply returns the value a regardless of anything else
(this is the return a part); and then bind it together with some other computation f, then this is equivalent to simply performing the computation f on a directly.
For example, suppose f is the function putStrLn and a is the string "Hello World." This rule states that binding a computation whose result is "Hello World" to putStrLn is the same as simply printing
it to the screen. This seems to make sense.
In do notation, this law states that the following two programs are equivalent:
law1a = do
x <- return a
f x
law1b = do
f a
The second monad law states that f >>= return ≡ f for some computation f. In other words, the law states that if we perform the computation f and then pass the result on to the trivial return
function, then all we have done is to perform the computation.
That this law must hold should be obvious. To see this, think of f as getLine (reads a string from the keyboard). This law states that reading a string and then returning the value read is exactly
the same as just reading the string.
In do notation, the law states that the following two programs are equivalent:
law2a = do
x <- f
return x
law2b = do
This states that f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h. At first glance, this law is not as easy to grasp as the other two. It is essentially an associativity law for monads.
Outside the world of monads, a function ${\displaystyle \cdot }$ is associative if ${\displaystyle (f\cdot g)\cdot h=f\cdot (g\cdot h)}$ . For instance, + and * are associative, since bracketing on
these functions doesn't make a difference. On the other hand, - and / are not associative since, for example, ${\displaystyle 5-(3-1)ot =(5-3)-1}$ .
If we throw away the messiness with the lambdas, we see that this law states: f >>= (g >>= h) ≡ (f >>= g) >>= h. The intuition behind this law is that when we string together actions, it doesn't
matter how we group them.
For a concrete example, take f to be getLine. Take g to be an action which takes a value as input, prints it to the screen, reads another string via getLine, and then returns that newly read string.
Take h to be putStrLn.
Let's consider what (\x -> g x >>= h) does. It takes a value called x, and runs g on it, feeding the results into h. In this instance, this means that it's going to take a value, print it, read
another value and then print that. Thus, the entire left hand side of the law first reads a string and then does what we've just described.
On the other hand, consider (f >>= g). This action reads a string from the keyboard, prints it, and then reads another string, returning that newly read string as a result. When we bind this with h
as on the right hand side of the law, we get an action that does the action described by (f >>= g), and then prints the results.
Clearly, these two actions are the same.
While this explanation is quite complicated, and the text of the law is also quite complicated, the actual meaning is simple: if we have three actions, and we compose them in the same order, it
doesn't matter where we put the parentheses. The rest is just notation.
In do notation, the law says that the following two programs are equivalent:
law3a = do
x <- f
do y <- g x
h y
law3b = do
y <- do x <- f
g x
h y
A Simple State Monad
One of the simplest monads that we can craft is a state-passing monad. In Haskell, all state information usually must be passed to functions explicitly as arguments. Using monads, we can effectively
hide some state information.
Suppose we have a function f of type a -> b, and we need to add state to this function. In general, if state is of type state, we can encode it by changing the type of f to a -> state -> (state, b).
That is, the new version of f takes the original parameter of type a and a new state parameter. And, in addition to returning the value of type b, it also returns an updated state, encoded in a
For instance, suppose we have a binary tree defined as:
data Tree a
= Leaf a
| Branch (Tree a) (Tree a)
Now, we can write a simple map function to apply some function to each value in the leaves:
mapTree :: (a -> b) -> Tree a -> Tree b
mapTree f (Leaf a) = Leaf (f a)
mapTree f (Branch lhs rhs) =
Branch (mapTree f lhs) (mapTree f rhs)
This works fine until we need to write a function that numbers the leaves left to right. In a sense, we need to add state, which keeps track of how many leaves we've numbered so far, to the mapTree
function. We can augment the function to something like:
mapTreeState :: (a -> state -> (state, b)) ->
Tree a -> state -> (state, Tree b)
mapTreeState f (Leaf a) state =
let (state', b) = f a state
in (state', Leaf b)
mapTreeState f (Branch lhs rhs) state =
let (state' , lhs') = mapTreeState f lhs state
(state'', rhs') = mapTreeState f rhs state'
in (state'', Branch lhs' rhs')
This is beginning to get a bit unwieldy, and the type signature is getting harder and harder to understand. What we want to do is abstract away the state passing part. That is, the differences
between mapTree and mapTreeState are: (1) the augmented f type, (2) we replaced the type -> Tree b with -> state -> (state, Tree b). Notice that both types changed in exactly the same way. We can
abstract this away with a type synonym declaration:
type State st a = st -> (st, a)
To go along with this type, we write two functions:
returnState :: a -> State st a
returnState a = \st -> (st, a)
bindState :: State st a -> (a -> State st b) ->
State st b
bindState m k = \st ->
let (st', a) = m st
m' = k a
in m' st'
Let's examine each of these in turn. The first function, returnState, takes a value of type a and creates something of type State st a. If we think of the st as the state, and the value of type a as
the value, then this is a function that doesn't change the state and returns the value a.
The bindState function looks distinctly like the interior let declarations in mapTreeState. It takes two arguments. The first argument is an action that returns something of type a with state st. The
second is a function that takes this a and produces something of type b also with the same state. The result of bindState is essentially the result of transforming the a into a b.
The definition of bindState takes an initial state, st. It first applies this to the State st a argument called m. This gives back a new state st' and a value a. It then lets the function k act on a,
producing something of type State st b, called m'. We finally run m' with the new state st'.
We write a new function, mapTreeStateM and give it the type:
mapTreeStateM :: (a -> State st b) -> Tree a -> State st (Tree b)
Using these "plumbing" functions (returnState and bindState) we can write this function without ever having to explicitly talk about the state:
mapTreeStateM f (Leaf a) =
f a `bindState` \b ->
returnState (Leaf b)
mapTreeStateM f (Branch lhs rhs) =
mapTreeStateM f lhs `bindState` \lhs' ->
mapTreeStateM f rhs `bindState` \rhs' ->
returnState (Branch lhs' rhs')
In the Leaf case, we apply f to a and then bind the result to a function that takes the result and returns a Leaf with the new value.
In the Branch case, we recurse on the left-hand-side, binding the result to a function that recurses on the right-hand-side, binding that to a simple function that returns the newly created Branch.
As you have probably guessed by this point, State st is a monad, returnState is analogous to the overloaded return method, and bindState is analogous to the overloaded >>= method. In fact, we can
verify that State st a obeys the monad laws:
Law 1 states: return a >>= f ≡ f a. Let's calculate on the left hand side, substituting our names:
returnState a `bindState` f
\st -> let (st', a) = (returnState a) st
m' = f a
in m' st'
\st -> let (st', a) = (\st -> (st, a)) st
in (f a) st'
\st -> let (st', a) = (st, a)
in (f a) st'
\st -> (f a) st
f a
In the first step, we simply substitute the definition of bindState. In the second step, we simplify the last two lines and substitute the definition of returnState. In the third step, we apply st to
the lambda function. In the fourth step, we rename st' to st and remove the let. In the last step, we eta reduce.
Moving on to Law 2, we need to show that f >>= return ≡ f. This is shown as follows:
f `bindState` returnState
\st -> let (st', a) = f st
in (returnState a) st'
\st -> let (st', a) = f st
in (\st -> (st, a)) st'
\st -> let (st', a) = f st
in (st', a)
\st -> f st
Finally, we need to show that State obeys the third law: f >>= (\x -> g x >>= h) ≡ (f >>= g) >>= h. This is much more involved to show, so we will only sketch the proof here. Notice that we can write
the left-hand-side as:
\st -> let (st', a) = f st
in (\x -> g x `bindState` h) a st'
\st -> let (st', a) = f st
in (g a `bindState` h) st'
\st -> let (st', a) = f st
in (\st' -> let (st'', b) = g a
in h b st'') st'
\st -> let (st' , a) = f st
(st'', b) = g a st'
(st''',c) = h b st''
in (st''',c)
The interesting thing to note here is that we have both action applications on the same let level. Since let is associative, this means that we can put whichever bracketing we prefer and the results
will not change. Of course, this is an informal, "hand waving" argument and it would take us a few more derivations to actually prove, but this gives the general idea.
Now that we know that State st is actually a monad, we'd like to make it an instance of the Monad class. Unfortunately, the straightforward way of doing this doesn't work. We can't write:
instance Monad (State st) where { ... }
This is because you cannot make instances out of non-fully-applied type synonyms. Instead, what we need to do instead is convert the type synonym into a newtype, as:
newtype State st a = State (st -> (st, a))
Unfortunately, this means that we need to do some packing and unpacking of the State constructor in the Monad instance declaration, but it's not terribly difficult:
instance Monad (State state) where
return a = State (\state -> (state, a))
State run >>= action = State run'
where run' st =
let (st', a) = run st
State run'' = action a
in run'' st'
Now, we can write our mapTreeM function as:
mapTreeM :: (a -> State state b) -> Tree a ->
State state (Tree b)
mapTreeM f (Leaf a) = do
b <- f a
return (Leaf b)
mapTreeM f (Branch lhs rhs) = do
lhs' <- mapTreeM f lhs
rhs' <- mapTreeM f rhs
return (Branch lhs' rhs')
which is significantly cleaner than before. In fact, if we remove the type signature, we get the more general type:
mapTreeM :: Monad m => (a -> m b) -> Tree a ->
m (Tree b)
That is, mapTreeM can be run in any monad, not just our State monad.
Now, the nice thing about encapsulating the stateful aspect of the computation like this is that we can provide functions to get and change the current state. These look like:
getState :: State state state
getState = State (\state -> (state, state))
putState :: state -> State state ()
putState new = State (\_ -> (new, ()))
Here, getState is a monadic operation that takes the current state, passes it through unchanged, and then returns it as the value. The putState function takes a new state and produces an action that
ignores the current state and inserts the new one.
Now, we can write our numberTree function as:
numberTree :: Tree a -> State Int (Tree (a, Int))
numberTree tree = mapTreeM number tree
where number v = do
cur <- getState
putState (cur+1)
return (v,cur)
Finally, we need to be able to run the action by providing an initial state:
runStateM :: State state a -> state -> a
runStateM (State f) st = snd (f st)
Now, we can provide an example Tree:
testTree =
(Leaf 'a')
(Leaf 'b')
(Leaf 'c')))
(Leaf 'd')
(Leaf 'e'))
and number it:
State> runStateM (numberTree testTree) 1
Branch (Branch (Leaf ('a',1)) (Branch (Leaf ('b',2))
(Leaf ('c',3)))) (Branch (Leaf ('d',4))
(Leaf ('e',5)))
This may seem like a large amount of work to do something simple. However, note the new power of mapTreeM. We can also print out the leaves of the tree in a left-to-right fashion as:
State> mapTreeM print testTree
This crucially relies on the fact that mapTreeM has the more general type involving arbitrary monads -- not just the state monad. Furthermore, we can write an action that will make each leaf value
equal to its old value as well as all the values preceding:
fluffLeaves tree = mapTreeM fluff tree
where fluff v = do
cur <- getState
putState (v:cur)
return (v:cur)
and can see it in action:
State> runStateM (fluffLeaves testTree) []
Branch (Branch (Leaf "a") (Branch (Leaf "ba")
(Leaf "cba"))) (Branch (Leaf "dcba")
(Leaf "edcba"))
In fact, you don't even need to write your own monad instance and datatype. All this is built in to the Control.Monad.State module. There, our runStateM is called evalState; our getState is called
get; and our putState is called put.
This module also contains a state transformer monad, which we will discuss in the section on Transformer.
It turns out that many of our favorite datatypes are actually monads themselves. Consider, for instance, lists. They have a monad definition that looks something like:
instance Monad [] where
return x = [x]
l >>= f = concatMap f l
fail _ = []
This enables us to use lists in do notation. For instance, given the definition:
cross l1 l2 = do
x <- l1
y <- l2
return (x,y)
we get a cross-product function:
Monads> cross "ab" "def"
It is not a coincidence that this looks very much like the list comprehension form:
Prelude> [(x,y) | x <- "ab", y <- "def"]
List comprehension form is simply an abbreviated form of a monadic statement using lists. In fact, in older versions of Haskell, the list comprehension form could be used for any monad -- not just
lists. However, in the current version of Haskell, this is no longer allowed.
The Maybe type is also a monad, with failure being represented as Nothing and with success as Just. We get the following instance declaration:
instance Monad Maybe where
return a = Just a
Nothing >>= f = Nothing
Just x >>= f = f x
fail _ = Nothing
We can use the same cross product function that we did for lists on Maybes. This is because the do notation works for any monad, and there's nothing specific to lists about the cross function.
Monads> cross (Just 'a') (Just 'b')
Just ('a','b')
Monads> cross (Nothing :: Maybe Char) (Just 'b')
Monads> cross (Just 'a') (Nothing :: Maybe Char)
Monads> cross (Nothing :: Maybe Char)
(Nothing :: Maybe Char)
What this means is that if we write a function (like searchAll from the section on Classes) only in terms of monadic operators, we can use it with any monad, depending on what we mean. Using real
monadic functions (not do notation), the searchAll function looks something like:
searchAll g@(Graph vl el) src dst
| src == dst = return [src]
| otherwise = search' el
where search' [] = fail "no path"
search' ((u,v,_):es)
| src == u =
searchAll g v dst >>= \path ->
return (u:path)
| otherwise = search' es
The type of this function is Monad m => Graph v e -> Int -> Int -> m [Int]. This means that no matter what monad we're using at the moment, this function will perform the calculation. Suppose we have
the following graph:
gr = Graph [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd')]
[(0,1,'l'), (0,2,'m'), (1,3,'n'), (2,3,'m')]
This represents a graph with four nodes, labelled a,b,c and d. There is an edge from a to both b and c. There is also an edge from both b and c to d. Using the Maybe monad, we can compute the path
from a to d:
Monads> searchAll gr 0 3 :: Maybe [Int]
Just [0,1,3]
We provide the type signature, so that the interpreter knows what monad we're using. If we try to search in the opposite direction, there is no path. The inability to find a path is represented as
Nothing in the Maybe monad:
Monads> searchAll gr 3 0 :: Maybe [Int]
Note that the string "no path" has disappeared since there's no way for the Maybe monad to record this.
If we perform the same impossible search in the list monad, we get the empty list, indicating no path:
Monads> searchAll gr 3 0 :: [[Int]]
If we perform the possible search, we get back a list containing the first path:
Monads> searchAll gr 0 3 :: [[Int]]
You may have expected this function call to return all paths, but, as coded, it does not. See the section on Plus for more about using lists to represent nondeterminism.
If we use the IO monad, we can actually get at the error message, since IO knows how to keep track of error messages:
Monads> searchAll gr 0 3 :: IO [Int]
Monads> it
Monads> searchAll gr 3 0 :: IO [Int]
*** Exception: user error
Reason: no path
In the first case, we needed to type it to get GHCi to actually evaluate the search.
There is one problem with this implementation of searchAll: if it finds an edge that does not lead to a solution, it won't be able to backtrack. This has to do with the recursive call to searchAll
inside of search'. Consider, for instance, what happens if searchAll g v dst doesn't find a path. There's no way for this implementation to recover. For instance, if we remove the edge from node b to
node d, we should still be able to find a path from a to d, but this algorithm can't find it. We define:
gr2 = Graph [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd')]
[(0,1,'l'), (0,2,'m'), (2,3,'m')]
and then try to search:
Monads> searchAll gr2 0 3
*** Exception: user error
Reason: no path
To fix this, we need a function like combine from our Computation class. We will see how to do this in the section on Plus.
Verify that Maybe obeys the three monad laws.
The type Either String is a monad that can keep track of errors. Write an instance for it, and then try doing the search from this chapter using this monad.
Hint: Your instance declaration should begin: instance Monad (Either String) where.
The Monad/Control.Monad library contains a few very useful monadic combinators, which haven't yet been thoroughly discussed. The ones we will discuss in this section, together with their types, are:
• (=<<) :: (a -> m b) -> m a -> m b
• mapM :: (a -> m b) -> [a] -> m [b]
• mapM_ :: (a -> m b) -> [a] -> m ()
• filterM :: (a -> m Bool) -> [a] -> m [a]
• foldM :: (a -> b -> m a) -> a -> [b] -> m a
• sequence :: [m a] -> m [a]
• sequence_ :: [m a] -> m ()
• liftM :: (a -> b) -> m a -> m b
• when :: Bool -> m () -> m ()
• join :: m (m a) -> m a
In the above, m is always assumed to be an instance of Monad.
In general, functions with an underscore at the end are equivalent to the ones without, except that they do not return any value.
The =<< function is exactly the same as >>=, except it takes its arguments in the opposite order. For instance, in the IO monad, we can write either of the following:
Monads> writeFile "foo" "hello world!" >>
(readFile "foo" >>= putStrLn)
hello world!
Monads> writeFile "foo" "hello world!" >>
(putStrLn =<< readFile "foo")
hello world!
The mapM, filterM and foldM are our old friends map, filter and foldl wrapped up inside of monads. These functions are incredibly useful (particularly foldM) when working with monads. We can use
mapM_, for instance, to print a list of things to the screen:
Monads> mapM_ print [1,2,3,4,5]
We can use foldM to sum a list and print the intermediate sum at each step:
Monads> foldM (\a b ->
putStrLn (show a ++ "+" ++ show b ++
"=" ++ show (a+b)) >>
return (a+b)) 0 [1..5]
Monads> it
The sequence and sequence_ functions simply "execute" a list of actions. For instance:
Monads> sequence [print 1, print 2, print 'a']
Monads> it
Monads> sequence_ [print 1, print 2, print 'a']
Monads> it
We can see that the underscored version doesn't return each value, while the non-underscored version returns the list of the return values.
The liftM function "lifts" a non-monadic function to a monadic function. (Do not confuse this with the lift function used for monad transformers in the section on Transformer.) This is useful for
shortening code (among other things). For instance, we might want to write a function that prepends each line in a file with its line number. We can do this with:
numberFile :: FilePath -> IO ()
numberFile fp = do
text <- readFile fp
let l = lines text
let n = zipWith (\n t -> show n ++ ' ' : t) [1..] l
mapM_ putStrLn n
However, we can shorten this using liftM:
numberFile :: FilePath -> IO ()
numberFile fp = do
l <- lines `liftM` readFile fp
let n = zipWith (\n t -> show n ++ ' ' : t) [1..] l
mapM_ putStrLn n
In fact, you can apply any sort of (pure) processing to a file using liftM. For instance, perhaps we also want to split lines into words; we can do this with:
w <- (map words . lines) `liftM` readFile fp
Note that the parentheses are required, since the (.) function has the same fixity has `liftM`.
Lifting pure functions into monads is also useful in other monads. For instance liftM can be used to apply function inside of Just. For instance:
Monads> liftM (+1) (Just 5)
Just 6
Monads> liftM (+1) Nothing
The when function executes a monadic action only if a condition is met. So, if we only want to print non-empty lines:
Monads> mapM_ (\l -> when (not $ null l) (putStrLn l))
Of course, the same could be accomplished with filter, but sometimes when is more convenient.
Finally, the join function is the monadic equivalent of concat on lists. In fact, when m is the list monad, join is exactly concat. In other monads, it accomplishes a similar task:
Monads> join (Just (Just 'a'))
Just 'a'
Monads> join (Just (Nothing :: Maybe Char))
Monads> join (Nothing :: Maybe (Maybe Char))
Monads> join (return (putStrLn "hello"))
Monads> return (putStrLn "hello")
Monads> join [[1,2,3],[4,5]]
These functions will turn out to be even more useful as we move on to more advanced topics in the chapter Io advanced.
Given only the >>= and return functions, it is impossible to write a function like combine with type c a -> c a -> c a. However, such a function is so generally useful that it exists in another class
called MonadPlus. In addition to having a combine function, instances of MonadPlus also have a "zero" element that is the identity under the "plus" (i.e., combine) action. The definition is:
class Monad m => MonadPlus m where
mzero :: m a
mplus :: m a -> m a -> m a
In order to gain access to MonadPlus, you need to import the Monad module (or Control.Monad in the hierarchical libraries).
In the section on Common, we showed that Maybe and list are both monads. In fact, they are also both instances of MonadPlus. In the case of Maybe, the zero element is Nothing; in the case of lists,
it is the empty list. The mplus operation on Maybe is Nothing, if both elements are Nothing; otherwise, it is the first Just value. For lists, mplus is the same as ++.
That is, the instance declarations look like:
instance MonadPlus Maybe where
mzero = Nothing
mplus Nothing y = y
mplus x _ = x
instance MonadPlus [] where
mzero = []
mplus x y = x ++ y
We can use this class to reimplement the search function we've been exploring, such that it will explore all possible paths. The new function looks like:
searchAll2 g@(Graph vl el) src dst
| src == dst = return [src]
| otherwise = search' el
where search' [] = fail "no path"
search' ((u,v,_):es)
| src == u =
(searchAll2 g v dst >>= \path ->
return (u:path)) `mplus`
search' es
| otherwise = search' es
Now, when we're going through the edge list in search', and we come across a matching edge, not only do we explore this path, but we also continue to explore the out-edges of the current node in the
recursive call to search'.
The IO monad is not an instance of MonadPlus; we're not able to execute the search with this monad. We can see that when using lists as the monad, we (a) get all possible paths in gr and (b) get a
path in gr2.
MPlus> searchAll2 gr 0 3 :: [[Int]]
MPlus> searchAll2 gr2 0 3 :: [[Int]]
You might be tempted to implement this as:
searchAll2 g@(Graph vl el) src dst
| src == dst = return [src]
| otherwise = search' el
where search' [] = fail "no path"
search' ((u,v,_):es)
| src == u = do
path <- searchAll2 g v dst
rest <- search' es
return ((u:path) `mplus` rest)
| otherwise = search' es
But note that this doesn't do what we want. Here, if the recursive call to searchAll2 fails, we don't try to continue and execute search' es. The call to mplus must be at the top level in order for
it to work.
Suppose that we changed the order of arguments to mplus. I.e., the matching case of search' looked like:
search' es `mplus`
(searchAll2 g v dst >>= \path ->
return (u:path))
How would you expect this to change the results when using the list
monad on gr? Why?
Often we want to "piggyback" monads on top of each other. For instance, there might be a case where you need access to both IO operations through the IO monad and state functions through some state
monad. In order to accomplish this, we introduce a MonadTrans class, which essentially "lifts" the operations of one monad into another. You can think of this as stacking monads on top of each other.
This class has a simple method: lift. The class declaration for MonadTrans is:
class MonadTrans t where
lift :: Monad m => m a -> t m a
The idea here is that t is the outer monad and that m lives inside of it. In order to execute a command of type Monad m => m a, we first lift it into the transformer.
The simplest example of a transformer (and arguably the most useful) is the state transformer monad, which is a state monad wrapped around an arbitrary monad. Before, we defined a state monad as:
newtype State state a = State (state -> (state, a))
Now, instead of using a function of type state -> (state, a) as the monad, we assume there's some other monad m and make the internal action into something of type state -> m (state, a). This gives
rise to the following definition for a state transformer:
newtype StateT state m a =
StateT (state -> m (state, a))
For instance, we can think of m as IO. In this case, our state transformer monad is able to execute actions in the IO monad. First, we make this an instance of MonadTrans:
instance MonadTrans (StateT state) where
lift m = StateT (\s -> do a <- m
return (s,a))
Here, lifting a function from the realm of m to the realm of StateT state simply involves keeping the state (the s value) constant and executing the action.
Of course, we also need to make StateT a monad, itself. This is relatively straightforward, provided that m is already a monad:
instance Monad m => Monad (StateT state m) where
return a = StateT (\s -> return (s,a))
StateT m >>= k = StateT (\s -> do
(s', a) <- m s
let StateT m' = k a
m' s')
fail s = StateT (\_ -> fail s)
The idea behind the definition of return is that we keep the state constant and simply return the state/a pair in the enclosed monad. Note that the use of return in the definition of return refers to
the enclosed monad, not the state transformer.
In the definition of bind, we create a new StateT that takes a state s as an argument. First, it applies this state to the first action (StateT m) and gets the new state and answer as a result. It
then runs the k action on this new state and gets a new transformer. It finally applies the new state to this transformer. This definition is nearly identical to the definition of bind for the
standard (non-transformer) State monad described in the section on State.
The fail function passes on the call to fail in the enclosed monad, since state transformers don't natively know how to deal with failure.
Of course, in order to actually use this monad, we need to provide function getT , putT and evalStateT . These are analogous to getState, putState and runStateM from the section on State:
getT :: Monad m => StateT s m s
getT = StateT (\s -> return (s, s))
putT :: Monad m => s -> StateT s m ()
putT s = StateT (\_ -> return (s, ()))
evalStateT :: Monad m => StateT s m a -> s -> m a
evalStateT (StateT m) state = do
(s', a) <- m state
return a
These functions should be straightforward. Note, however, that the result of evalStateT is actually a monadic action in the enclosed monad. This is typical of monad transformers: they don't know how
to actually run things in their enclosed monad (they only know how to lift actions). Thus, what you get out is a monadic action in the inside monad (in our case, IO), which you then need to run
We can use state transformers to reimplement a version of our mapTreeM function from the section on State. The only change here is that when we get to a leaf, we print out the value of the leaf; when
we get to a branch, we just print out "Branch."
mapTreeM action (Leaf a) = do
lift (putStrLn ("Leaf " ++ show a))
b <- action a
return (Leaf b)
mapTreeM action (Branch lhs rhs) = do
lift (putStrLn "Branch")
lhs' <- mapTreeM action lhs
rhs' <- mapTreeM action rhs
return (Branch lhs' rhs')
The only difference between this function and the one from the section on State is the calls to lift (putStrLn ...) as the first line. The lift tells us that we're going to be executing a command in
an enclosed monad. In this case, the enclosed monad is IO, since the command lifted is putStrLn.
The type of this function is relatively complex:
mapTreeM :: (MonadTrans t, Monad (t IO), Show a) =>
(a -> t IO a1) -> Tree a -> t IO (Tree a1)
Ignoring, for a second, the class constraints, this says that mapTreeM takes an action and a tree and returns a tree. This just as before. In this, we require that t is a monad transformer (since we
apply lift in it); we require that t IO is a monad, since we use putStrLn we know that the enclosed monad is IO; finally, we require that a is an instance of show -- this is simply because we use
show to show the value of leaves.
Now, we simply change numberTree to use this version of mapTreeM, and the new versions of get and put, and we end up with:
numberTree tree = mapTreeM number tree
where number v = do
cur <- getT
putT (cur+1)
return (v,cur)
Using this, we can run our monad:
MTrans> evalStateT (numberTree testTree) 0
Leaf 'a'
Leaf 'b'
Leaf 'c'
Leaf 'd'
Leaf 'e'
*MTrans> it
Branch (Branch (Leaf ('a',0))
(Branch (Leaf ('b',1)) (Leaf ('c',2))))
(Branch (Leaf ('d',3)) (Leaf ('e',4)))
One problem not specified in our discussion of MonadPlus is that our search algorithm will fail to terminate on graphs with cycles. Consider:
gr3 = Graph [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd')]
[(0,1,'l'), (1,0,'m'), (0,2,'n'),
(1,3,'o'), (2,3,'p')]
In this graph, there is a back edge from node b back to node a. If we attempt to run searchAll2, regardless of what monad we use, it will fail to terminate. Moreover, if we move this erroneous edge
to the end of the list (and call this gr4), the result of searchAll2 gr4 0 3 will contain an infinite number of paths: presumably we only want paths that don't contain cycles.
In order to get around this problem, we need to introduce state. Namely, we need to keep track of which nodes we have visited, so that we don't visit them again.
We can do this as follows:
searchAll5 g@(Graph vl el) src dst
| src == dst = do
visited <- getT
putT (src:visited)
return [src]
| otherwise = do
visited <- getT
putT (src:visited)
if src `elem` visited
then mzero
else search' el
search' [] = mzero
search' ((u,v,_):es)
| src == u =
(do path <- searchAll5 g v dst
return (u:path)) `mplus`
search' es
| otherwise = search' es
Here, we implicitly use a state transformer (see the calls to getT and putT) to keep track of visited states. We only continue to recurse, when we encounter a state we haven't yet visited.
Furthermore, when we recurse, we add the current state to our set of visited states.
Now, we can run the state transformer and get out only the correct paths, even on the cyclic graphs:
MTrans> evalStateT (searchAll5 gr3 0 3) [] :: [[Int]]
MTrans> evalStateT (searchAll5 gr4 0 3) [] :: [[Int]]
Here, the empty list provided as an argument to evalStateT is the initial state (i.e., the initial visited list). In our case, it is empty.
We can also provide an execStateT method that, instead of returning a result, returns the final state. This function looks like:
execStateT :: Monad m => StateT s m a -> s -> m s
execStateT (StateT m) state = do
(s', a) <- m state
return s'
This is not so useful in our case, as it will return exactly the reverse of evalStateT (try it and find out!), but can be useful in general (if, for instance, we need to know how many numbers are
used in numberTree).
Write a function searchAll6, based on the code for searchAll2, that, at every entry to the main function (not the recursion over the edge list), prints the search being conducted. For instance, the
output generated for searchAll6 gr 0 3 should look like:
Exploring 0 -> 3
Exploring 1 -> 3
Exploring 3 -> 3
Exploring 2 -> 3
Exploring 3 -> 3
MTrans> it
In order to do this, you will have to define your own list monad
transformer and make appropriate instances of it.
Combine the searchAll5 function (from this section) with the searchAll6 function (from the previous exercise) into a single function called searchAll7. This function should perform IO as in
searchAll6 but should also keep track of state using a state
It turns out that a certain class of parsers are all monads. This makes the construction of parsing libraries in Haskell very clean. In this chapter, we begin by building our own (small) parsing
library in the section on A Simple Parsing Monad and then, in the final section, introduce the Parsec parsing library.
A Simple Parsing Monad
Consider the task of parsing. A simple parsing monad is much like a state monad, where the state is the unparsed string. We can represent this exactly as:
newtype Parser a = Parser
{ runParser :: String -> Either String (String, a) }
We again use Left err to be an error condition. This yields standard instances of Monad and MonadPlus:
instance Monad Parser where
return a = Parser (\xl -> Right (xl,a))
fail s = Parser (\xl -> Left s)
Parser m >>= k = Parser $ \xl ->
case m xl of
Left s -> Left s
Right (xl', a) ->
let Parser n = k a
in n xl'
instance MonadPlus Parser where
mzero = Parser (\xl -> Left "mzero")
Parser p `mplus` Parser q = Parser $ \xl ->
case p xl of
Right a -> Right a
Left err -> case q xl of
Right a -> Right a
Left _ -> Left err
Now, we want to build up a library of parsing "primitives." The most basic primitive is a parser that will read a specific character. This function looks like:
char :: Char -> Parser Char
char c = Parser char'
where char' [] = Left ("expecting " ++ show c ++
" got EOF")
char' (x:xs)
| x == c = Right (xs, c)
| otherwise = Left ("expecting " ++
show c ++ " got " ++
show x)
Here, the parser succeeds only if the first character of the input is the expected character.
We can use this parser to build up a parser for the string "Hello":
helloParser :: Parser String
helloParser = do
char 'H'
char 'e'
char 'l'
char 'l'
char 'o'
return "Hello"
This shows how easy it is to combine these parsers. We don't need to worry about the underlying string -- the monad takes care of that for us. All we need to do is combine these parser primitives. We
can test this parser by using runParser and by supplying input:
Parsing> runParser helloParser "Hello"
Right ("","Hello")
Parsing> runParser helloParser "Hello World!"
Right (" World!","Hello")
Parsing> runParser helloParser "hello World!"
Left "expecting 'H' got 'h'"
We can have a slightly more general function, which will match any character fitting a description:
matchChar :: (Char -> Bool) -> Parser Char
matchChar c = Parser matchChar'
where matchChar' [] =
Left ("expecting char, got EOF")
matchChar' (x:xs)
| c x = Right (xs, x)
| otherwise =
Left ("expecting char, got " ++
show x)
Using this, we can write a case-insensitive "Hello" parser:
ciHelloParser = do
c1 <- matchChar (`elem` "Hh")
c2 <- matchChar (`elem` "Ee")
c3 <- matchChar (`elem` "Ll")
c4 <- matchChar (`elem` "Ll")
c5 <- matchChar (`elem` "Oo")
return [c1,c2,c3,c4,c5]
Of course, we could have used something like matchChar ((=='h') . toLower), but the above implementation works just as well. We can test this function:
Parsing> runParser ciHelloParser "hELlO world!"
Right (" world!","hELlO")
Finally, we can have a function, which will match any character:
anyChar :: Parser Char
anyChar = Parser anyChar'
where anyChar' [] =
Left ("expecting character, got EOF")
anyChar' (x:xs) = Right (xs, x)
On top of these primitives, we usually build some combinators. The many combinator, for instance, will take a parser that parses entities of type a and will make it into a parser that parses entities
of type [a] (this is a Kleene-star operator):
many :: Parser a -> Parser [a]
many (Parser p) = Parser many'
where many' xl =
case p xl of
Left err -> Right (xl, [])
Right (xl',a) ->
let Right (xl'', rest) = many' xl'
in Right (xl'', a:rest)
The idea here is that first we try to apply the given parser, p. If this fails, we succeed but return the empty list. If p succeeds, we recurse and keep trying to apply p until it fails. We then
return the list of successes we've accumulated.
In general, there would be many more functions of this sort, and they would be hidden away in a library, so that users couldn't actually look inside the Parser type. However, using them, you could
build up, for instance, a parser that parses (non-negative) integers:
int :: Parser Int
int = do
t1 <- matchChar isDigit
tr <- many (matchChar isDigit)
return (read (t1:tr))
In this function, we first match a digit (the isDigit function comes from the module Char/Data.Char) and then match as many more digits as we can. We then read the result and return it. We can test
this parser as before:
Parsing> runParser int "54"
Right ("",54)
*Parsing> runParser int "54abc"
Right ("abc",54)
*Parsing> runParser int "a54abc"
Left "expecting char, got 'a'"
Now, suppose we want to parse a Haskell-style list of Ints. This becomes somewhat difficult because, at some point, we're either going to parse a comma or a close brace, but we don't know when this
will happen. This is where the fact that Parser is an instance of MonadPlus comes in handy: first we try one, then we try the other.
Consider the following code:
intList :: Parser [Int]
intList = do
char '['
intList' `mplus` (char ']' >> return [])
where intList' = do
i <- int
r <- (char ',' >> intList') `mplus`
(char ']' >> return [])
return (i:r)
The first thing this code does is parse and open brace. Then, using mplus, it tries one of two things: parsing using intList', or parsing a close brace and returning an empty list.
The intList' function assumes that we're not yet at the end of the list, and so it first parses an int. It then parses the rest of the list. However, it doesn't know whether we're at the end yet, so
it again uses mplus. On the one hand, it tries to parse a comma and then recurse; on the other, it parses a close brace and returns the empty list. Either way, it simply prepends the int it parsed
itself to the beginning.
One thing that you should be careful of is the order in which you supply arguments to mplus. Consider the following parser:
tricky =
mplus (string "Hal") (string "Hall")
You might expect this parser to parse both the words "Hal" and "Hall;" however, it only parses the former. You can see this with:
Parsing> runParser tricky "Hal"
Right ("","Hal")
Parsing> runParser tricky "Hall"
Right ("l","Hal")
This is because it tries to parse "Hal," which succeeds, and then it doesn't bother trying to parse "Hall."
You can attempt to fix this by providing a parser primitive, which detects end-of-file (really, end-of-string) as:
eof :: Parser ()
eof = Parser eof'
where eof' [] = Right ([], ())
eof' xl = Left ("Expecting EOF, got " ++
show (take 10 xl))
You might then rewrite tricky using eof as:
tricky2 = do
s <- mplus (string "Hal") (string "Hall")
return s
But this also doesn't work, as we can easily see:
Parsing> runParser tricky2 "Hal"
Right ("",())
Parsing> runParser tricky2 "Hall"
Left "Expecting EOF, got \"l\""
This is because, again, the mplus doesn't know that it needs to parse the whole input. So, when you provide it with "Hall," it parses just "Hal" and leaves the last "l" lying around to be parsed
later. This causes eof to produce an error message.
The correct way to implement this is:
tricky3 =
mplus (do s <- string "Hal"
return s)
(do s <- string "Hall"
return s)
We can see that this works:
Parsing> runParser tricky3 "Hal"
Right ("","Hal")
Parsing> runParser tricky3 "Hall"
Right ("","Hall")
This works precisely because each side of the mplus knows that it must read the end.
In this case, fixing the parser to accept both "Hal" and "Hall" was fairly simple, due to the fact that we assumed we would be reading an end-of-file immediately afterwards. Unfortunately, if we
cannot disambiguate immediately, life becomes significantly more complicated. This is a general problem in parsing, and has little to do with monadic parsing. The solution most parser libraries
(e.g., Parsec, see the section on Parsec) have adopted is to only recognize "LL(1)" grammars: that means that you must be able to disambiguate the input with a one token look-ahead.
Write a parser intListSpace that will parse int lists but will allow arbitrary white space (spaces, tabs or newlines) between the
commas and brackets.
Given this monadic parser, it is fairly easy to add information regarding source position. For instance, if we're parsing a large file, it might be helpful to report the line number on which an error
occurred. We could do this simply by extending the Parser type and by modifying the instances and the primitives:
newtype Parser a = Parser
{ runParser :: Int -> String ->
Either String (Int, String, a) }
instance Monad Parser where
return a = Parser (\n xl -> Right (n,xl,a))
fail s = Parser (\n xl -> Left (show n ++
": " ++ s))
Parser m >>= k = Parser $ \n xl ->
case m n xl of
Left s -> Left s
Right (n', xl', a) ->
let Parser m2 = k a
in m2 n' xl'
instance MonadPlus Parser where
mzero = Parser (\n xl -> Left "mzero")
Parser p `mplus` Parser q = Parser $ \n xl ->
case p n xl of
Right a -> Right a
Left err -> case q n xl of
Right a -> Right a
Left _ -> Left err
matchChar :: (Char -> Bool) -> Parser Char
matchChar c = Parser matchChar'
where matchChar' n [] =
Left ("expecting char, got EOF")
matchChar' n (x:xs)
| c x =
Right (n+if x=='\n' then 1 else 0
, xs, x)
| otherwise =
Left ("expecting char, got " ++
show x)
The definitions for char and anyChar are not given, since they can be written in terms of matchChar. The many function needs to be modified only to include the new state.
Now, when we run a parser and there is an error, it will tell us which line number contains the error:
Parsing2> runParser helloParser 1 "Hello"
Right (1,"","Hello")
Parsing2> runParser int 1 "a54"
Left "1: expecting char, got 'a'"
Parsing2> runParser intList 1 "[1,2,3,a]"
Left "1: expecting ']' got '1'"
We can use the intListSpace parser from the prior exercise to see that this does in fact work:
Parsing2> runParser intListSpace 1
"[1 ,2 , 4 \n\n ,a\n]"
Left "3: expecting char, got 'a'"
Parsing2> runParser intListSpace 1
"[1 ,2 , 4 \n\n\n ,a\n]"
Left "4: expecting char, got 'a'"
Parsing2> runParser intListSpace 1
"[1 ,\n2 , 4 \n\n\n ,a\n]"
Left "5: expecting char, got 'a'"
We can see that the line number, on which the error occurs, increases as we add additional newlines before the erroneous "a".
As you continue developing your parser, you might want to add more and more features. Luckily, Graham Hutton and Daan Leijen have already done this for us in the Parsec library. This section is
intended to be an introduction to the Parsec library; it by no means covers the whole library, but it should be enough to get you started.
Like our library, Parsec provides a few basic functions to build parsers from characters. These are: char, which is the same as our char; anyChar, which is the same as our anyChar; satisfy, which is
the same as our matchChar; oneOf, which takes a list of Chars and matches any of them; and noneOf, which is the opposite of oneOf.
The primary function Parsec uses to run a parser is parse. However, in addition to a parser, this function takes a string that represents the name of the file you're parsing. This is so it can give
better error messages. We can try parsing with the above functions:
ParsecI> parse (char 'a') "stdin" "a"
Right 'a'
ParsecI> parse (char 'a') "stdin" "ab"
Right 'a'
ParsecI> parse (char 'a') "stdin" "b"
Left "stdin" (line 1, column 1):
unexpected "b"
expecting "a"
ParsecI> parse (char 'H' >> char 'a' >> char 'l')
"stdin" "Hal"
Right 'l'
ParsecI> parse (char 'H' >> char 'a' >> char 'l')
"stdin" "Hap"
Left "stdin" (line 1, column 3):
unexpected "p"
expecting "l"
Here, we can see a few differences between our parser and Parsec: first, the rest of the string isn't returned when we run parse. Second, the error messages produced are much better.
In addition to the basic character parsing functions, Parsec provides primitives for: spaces, which is the same as ours; space which parses a single space; letter, which parses a letter; digit, which
parses a digit; string, which is the same as ours; and a few others.
We can write our int and intList functions in Parsec as:
int :: CharParser st Int
int = do
i1 <- digit
ir <- many digit
return (read (i1:ir))
intList :: CharParser st [Int]
intList = do
char '['
intList' `mplus` (char ']' >> return [])
where intList' = do
i <- int
r <- (char ',' >> intList') `mplus`
(char ']' >> return [])
return (i:r)
First, note the type signatures. The st type variable is simply a state variable that we are not using. In the int function, we use the many function (built in to Parsec) together with the digit
function (also built in to Parsec). The intList function is actually identical to the one we wrote before.
Note, however, that using mplus explicitly is not the preferred method of combining parsers: Parsec provides a <|> function that is a synonym of mplus, but that looks nicer:
intList :: CharParser st [Int]
intList = do
char '['
intList' <|> (char ']' >> return [])
where intList' = do
i <- int
r <- (char ',' >> intList') <|>
(char ']' >> return [])
return (i:r)
We can test this:
ParsecI> parse intList "stdin" "[3,5,2,10]"
Right [3,5,2,10]
ParsecI> parse intList "stdin" "[3,5,a,10]"
Left "stdin" (line 1, column 6):
unexpected "a"
expecting digit
In addition to these basic combinators, Parsec provides a few other useful ones:
• choice takes a list of parsers and performs an or operation (<|>) between all of them.
• option takes a default value of type a and a parser that returns something of type a. It then tries to parse with the parser, but it uses the default value as the return, if the parsing fails.
• optional takes a parser that returns () and optionally runs it.
• between takes three parsers: an open parser, a close parser and a between parser. It runs them in order and returns the value of the between parser. This can be used, for instance, to take care
of the brackets on our intList parser.
• notFollowedBy takes a parser and returns one that succeeds only if the given parser would have failed.
Suppose we want to parse a simple calculator language that includes only plus and times. Furthermore, for simplicity, assume each embedded expression must be enclosed in parentheses. We can give a
datatype for this language as:
data Expr = Value Int
| Expr :+: Expr
| Expr :*: Expr
deriving (Eq, Ord, Show)
And then write a parser for this language as:
parseExpr :: Parser Expr
parseExpr = choice
[ do i <- int; return (Value i)
, between (char '(') (char ')') $ do
e1 <- parseExpr
op <- oneOf "+*"
e2 <- parseExpr
case op of
'+' -> return (e1 :+: e2)
'*' -> return (e1 :*: e2)
Here, the parser alternates between two options (we could have used <|>, but I wanted to show the choice combinator in action). The first simply parses an int and then wraps it up in the Value
constructor. The second option uses between to parse text between parentheses. What it parses is first an expression, then one of plus or times, then another expression. Depending on what the
operator is, it returns either e1 :+: e2 or e1 :*: e2.
We can modify this parser, so that instead of computing an Expr, it simply computes the value:
parseValue :: Parser Int
parseValue = choice
,between (char '(') (char ')') $ do
e1 <- parseValue
op <- oneOf "+*"
e2 <- parseValue
case op of
'+' -> return (e1 + e2)
'*' -> return (e1 * e2)
We can use this as:
ParsecI> parse parseValue "stdin" "(3*(4+3))"
Right 21
Now, suppose we want to introduce bindings into our language. That is, we want to also be able to say "let x = 5 in" inside of our expressions and then use the variables we've defined. In order to do
this, we need to use the getState and setState (or updateState) functions built in to Parsec.
parseValueLet :: CharParser (FiniteMap Char Int) Int
parseValueLet = choice
[ int
, do string "let "
c <- letter
char '='
e <- parseValueLet
string " in "
updateState (\fm -> addToFM fm c e)
, do c <- letter
fm <- getState
case lookupFM fm c of
Nothing -> unexpected ("variable " ++ show c ++
" unbound")
Just i -> return i
, between (char '(') (char ')') $ do
e1 <- parseValueLet
op <- oneOf "+*"
e2 <- parseValueLet
case op of
'+' -> return (e1 + e2)
'*' -> return (e1 * e2)
The int and recursive cases remain the same. We add two more cases, one to deal with let-bindings, the other to deal with usages.
In the let-bindings case, we first parse a "let" string, followed by the character we're binding (the letter function is a Parsec primitive that parses alphabetic characters), followed by its value
(a parseValueLet). Then, we parse the " in " and update the state to include this binding. Finally, we continue and parse the rest.
In the usage case, we simply parse the character and then look it up in the state. However, if it doesn't exist, we use the Parsec primitive unexpected to report an error.
We can see this parser in action using the runParser command, which enables us to provide an initial state:
ParsecI> runParser parseValueLet emptyFM "stdin"
"let c=5 in ((5+4)*c)"
Right 45
*ParsecI> runParser parseValueLet emptyFM "stdin"
"let c=5 in ((5+4)*let x=2 in (c+x))"
Right 63
*ParsecI> runParser parseValueLet emptyFM "stdin"
"((let x=2 in 3+4)*x)"
Right 14
Note that the bracketing does not affect the definitions of the variables. For instance, in the last example, the use of "x" is, in some sense, outside the scope of the definition. However, our
parser doesn't notice this, since it operates in a strictly left-to-right fashion. In order to fix this omission, bindings would have to be removed (see the exercises).
Modify the parseValueLet parser, so that it obeys bracketing. In order to do this, you will need to change the state to something like FiniteMap Char [Int], where the [Int] is a stack of | {"url":"https://en.m.wikibooks.org/wiki/Haskell/YAHT/Monads","timestamp":"2024-11-03T15:24:34Z","content_type":"text/html","content_length":"121138","record_id":"<urn:uuid:777c9cc9-2196-40bd-a52e-9c34704aa64f>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00846.warc.gz"} |
Course for Science and Mathematics Teachers
We cultivate future teachers who can teach knowledge and techniques of science and mathematics, as well as let children know how science and mathematics are interesting, and help understanding the
subjects with a sense of reality.
Fields of work
• Elementary school teachers
• Junior high school teachers(Science or Mathematics)
The following human resources are welcome to the Science and Mathematics Course.
• 1.A person who is willing to solve various problems in modern science and mathematics education.
• 2.A person who is interested or has deep concern in nature science and has devotion in science education through experiments and observations.
• 3.A person who is interested in mathematics and at the same time has strong intentions to become a teacher.
• 4.A person who is willing to deeply understand mathematics and wishes to cultivate active and logical thinking against nature and society.
Main Courses
• Arithmetic/Mathematics Education
• Linear Algebra
• Algebra - Basic
• Algebra - Application
• Elementary Geometry
• Geometry - Basic
• Geometry - Application
• Infinitesimal Calculus
• Infinitesimal Calculus - Practice
• Analysis - Basic
• Analysis - Application
• Probability Theory
• Mathematical Statistics
• Programming - For Beginners
• Programming - For Beginners - Practice
• Science Education
• Basic Physics
• Basic Chemistry
• Basic Biology
• Basic Earth Science
• Physics - Introduction
• Chemistry - Introduction
• Biology - Introduction
• Earth Science - Introduction
• Physics I, II & III
• Chemistry I, II, & III
• Biology I, II & III
• Earth Science I, II & III
• Basic Experiment (Physics, Chemistry, Biology & Earth Science) | {"url":"https://www.akita-u.ac.jp/eduhuman/english/department/dept_edu10.html","timestamp":"2024-11-01T22:56:58Z","content_type":"application/xhtml+xml","content_length":"14561","record_id":"<urn:uuid:b3b783fe-ed54-4d23-b600-29840baf56e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00298.warc.gz"} |
“The Problem of Spatial Autocorrelation:”
Cliff and Ord (1969), published forty years ago, marked a turning point in the treatment of spatial autocorrelation in quantitative geography. It provided the framework needed by any applied
researcher to attempt an implementation for a different system, possibly using a different programming language. In this spirit, here we examine how spatial weights have been represented in
implementations and may be reproduced, how the tabulated results in the paper may be reproduced, and how they may be extended to cover simulation.
One of the major assertions of Cliff and Ord (1969) is that their statistic advances the measurement of spatial autocorrelation with respect to Moran (1950) and Geary (1954) because a more general
specification of spatial weights could be used. This more general form has implications both for the preparation of the weights themselves, and for the calculation of the measures. We will look at
spatial weights first, before moving on to consider the measures presented in the paper and some of their subsequent developments. Before doing this, we will put together a data set matching that
used in Cliff and Ord (1969). They provide tabulated data for the counties of the Irish Republic, but omit Dublin from analyses. A shapefile included in this package, kindly made available by Michael
Tiefelsdorf, is used as a starting point:
eire <- as(sf::st_read(system.file("shapes/eire.gpkg", package="spData")[1]), "Spatial")
row.names(eire) <- as.character(eire$names)
#proj4string(eire) <- CRS("+proj=utm +zone=30 +ellps=airy +units=km")
## [1] "SpatialPolygonsDataFrame"
## attr(,"package")
## [1] "sp"
## [1] "A" "towns" "pale" "size" "ROADACC" "OWNCONS" "POPCHG"
## [8] "RETSALE" "INCOME" "names"
and read into a SpatialPolygonsDataFrame — classes used for handling spatial data in are fully described in Roger S. Bivand, Pebesma, and Gómez-Rubio (2008). To this we need to add the data tabulated
in the paper in Table 2, p. 40, here in the form of a text file with added rainfall values from Table 9, p. 49:
fn <- system.file("etc/misc/geary_eire.txt", package="spdep")[1]
ge <- read.table(fn, header=TRUE)
## [1] "serlet" "county" "pagval2_10" "pagval10_50"
## [5] "pagval50p" "cowspacre" "ocattlepacre" "pigspacre"
## [9] "sheeppacre" "townvillp" "carspcap" "radiopcap"
## [13] "retailpcap" "psinglem30_34" "rainfall"
Since we assigned the county names as feature identifiers when reading the shapefiles, we do the same with the extra data, and combine the objects:
row.names(ge) <- as.character(ge$county)
all.equal(row.names(ge), row.names(eire))
## [1] TRUE
eire_ge <- cbind(eire, ge)
Finally, we need to drop the Dublin county omitted in the analyses conducted in Cliff and Ord (1969):
eire_ge1 <- eire_ge[!(row.names(eire_ge) %in% "Dublin"),]
## [1] 25
To double-check our data, let us calculate the sample Beta coefficients, using the formulae given in the paper for sample moments:
skewness <- function(z) {z <- scale(z, scale=FALSE); ((sum(z^3)/length(z))^2)/((sum(z^2)/length(z))^3)}
kurtosis <- function(z) {z <- scale(z, scale=FALSE); (sum(z^4)/length(z))/((sum(z^2)/length(z))^2)}
These differ somewhat from the ways in which skewness and kurtosis are computed in modern statistical software, see for example Joanes and Gill (1998). However, for our purposes, they let us
reproduce Table 3, p. 42:
print(sapply(as(eire_ge1, "data.frame")[13:24], skewness), digits=3)
## pagval2_10 pagval10_50 pagval50p cowspacre ocattlepacre
## 1.675429 1.294978 0.000382 1.682094 0.086267
## pigspacre sheeppacre townvillp carspcap radiopcap
## 1.138387 1.842362 0.472748 0.011111 0.342805
## retailpcap psinglem30_34
## 0.002765 0.068169
print(sapply(as(eire_ge1, "data.frame")[13:24], kurtosis), digits=4)
## pagval2_10 pagval10_50 pagval50p cowspacre ocattlepacre
## 3.790 4.331 1.508 4.294 2.985
## pigspacre sheeppacre townvillp carspcap radiopcap
## 3.754 4.527 2.619 1.865 2.730
## retailpcap psinglem30_34
## 2.188 2.034
print(sapply(as(eire_ge1, "data.frame")[c(13,16,18,19)], function(x) skewness(log(x))), digits=3)
## pagval2_10 cowspacre pigspacre sheeppacre
## 0.68801 0.17875 0.00767 0.04184
print(sapply(as(eire_ge1, "data.frame")[c(13,16,18,19)], function(x) kurtosis(log(x))), digits=4)
## pagval2_10 cowspacre pigspacre sheeppacre
## 2.883 2.799 2.212 2.421
Using the tabulated value of \(23.6\) for the percentage of agricultural holdings above 50 in 1950 in Waterford, the skewness and kurtosis cannot be reproduced, but by comparison with the irishdata
dataset in , it turns out that the value should rather be \(26.6\), which yields the tabulated skewness and kurtosis values.
Before going on, the variables considered are presented in Table \[vars\].
Description of variables in the Geary data set.
pagval2_10 Percentage number agricultural holdings in valuation group £2–£10 (1950)
pagval10_50 Percentage number agricultural holdings in valuation group £10–£50 (1950)
pagval50p Percentage number agricultural holdings in valuation group above £50 (1950)
cowspacre Milch cows per 1000 acres crops and pasture (1952)
ocattlepacre Other cattle per 1000 acres crops and pasture (1952)
pigspacre Pigs per 1000 acres crops and pasture (1952)
sheeppacre Sheep per 1000 acres crops and pasture (1952)
townvillp Town and village population as percentage of total (1951)
carspcap Private cars registered per 1000 population (1952)
radiopcap Radio licences per 1000 population (1952)
retailpcap Retail sales £ per person (1951)
psinglem30_34 Single males as percentage of all males aged 30–34 (1951)
rainfall Average of rainfall for stations in Ireland, 1916–1950, mm
Spatial weights
As a basis for comparison, we will first read the unstandardised weighting matrix given in Table A1, p. 54, of the paper, reading a file corrected for the misprint giving O rather than D as a
neighbour of V:
fn <- system.file("etc/misc/unstand_sn.txt", package="spdep")[1]
unstand <- read.table(fn, header=TRUE)
## from to weight
## Length:110 Length:110 Min. :0.000600
## Class :character Class :character 1st Qu.:0.003225
## Mode :character Mode :character Median :0.007550
## Mean :0.007705
## 3rd Qu.:0.010225
## Max. :0.032400
In the file, the counties are represented by their serial letters, so ordering and conversion to integer index representation is required to reach a representation similar to that of the SpatialStats
module for spatial neighbours:
class(unstand) <- c("spatial.neighbour", class(unstand))
of <- ordered(unstand$from)
attr(unstand, "region.id") <- levels(of)
unstand$from <- as.integer(of)
unstand$to <- as.integer(ordered(unstand$to))
attr(unstand, "n") <- length(unique(unstand$from))
Having done this, we can change its representation to a listw object, assigning an appropriate style (generalised binary) for unstandardised values:
lw_unstand <- sn2listw(unstand)
lw_unstand$style <- "B"
## Characteristics of weights list object:
## Neighbour list object:
## Number of regions: 25
## Number of nonzero links: 110
## Percentage nonzero weights: 17.6
## Average number of links: 4.4
## Weights style: B
## Weights constants summary:
## n nn S0 S1 S2
## B 25 625 0.8476 0.01871808 0.1229232
Note that the values of S0, S1, and S2 correspond closely with those given on page 42 of the paper, \(0.84688672\), \(0.01869986\) and \(0.12267319\). The discrepancies appear to be due to rounding
in the printed table of weights.
The contiguous neighbours represented in this object ought to match those found using poly2nb. However, we see that the reproduced contiguities have a smaller link count:
nb <- poly2nb(eire_ge1)
## Neighbour list object:
## Number of regions: 25
## Number of nonzero links: 108
## Percentage nonzero weights: 17.28
## Average number of links: 4.32
The missing link is between Clare and Kerry, perhaps by the Tarbert–Killimer ferry, but the counties are not contiguous, as Figure \[plot\_nb\] shows:
xx <- diffnb(nb, lw_unstand$neighbours, verbose=TRUE)
## Neighbour difference for region id: Clare in relation to id: Kerry
## Neighbour difference for region id: Kerry in relation to id: Clare
plot(eire_ge1, border="grey60")
plot(nb, coordinates(eire_ge1), add=TRUE, pch=".", lwd=2)
plot(xx, coordinates(eire_ge1), add=TRUE, pch=".", lwd=2, col=3) | {"url":"https://cran.uib.no/web/packages/spdep/vignettes/CO69.html","timestamp":"2024-11-01T22:33:54Z","content_type":"text/html","content_length":"1048938","record_id":"<urn:uuid:fb066a30-5d71-4a6c-9e16-4231f542547e>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00252.warc.gz"} |
Binary Arithmetic Classes?
Does anyone have any binary arithmetic classes they would be willing to share? I’m in the process of writing a class for calculating network subnetting and as IP address and subnet masks are really
binary numbers, doing the operations in binary would be much simpler than what I am doing right now. I’ve got it all working but I figured if someone already has some classes I could use, it would
make my life simpler.
Sure wish Xojo supported things like actual types for binary and hex numbers as opposed to having the use the literal characters which don’t even really work when you convert a 10 based number to
binary using the bin function because then you end up with a string. If Bin gave me a binary number, that would be better.
Oh well…
A number is a number, so I’m not sure I follow.
To add binary “101” (5) and binary “11” (3) you could do this:
Dim b1 As String = “101”
Dim b2 As String = “11”
Dim n1 As Integer = Val("&b" + b1)
Dim n2 As Integer = Val("&b" + b2)
Dim sum As Integer = n1 + n2
Dim binaryValue As String = Bin(sum) // result is “1000” (8)
Paul beat me by 3 seconds
Note the same method works for Hex ("&h") and Octal ("&O")
and heck you can even mix the base types in the same equation if needed
I do have a BitBlock class that treats it’s data as bits instead of bytes, if that would help.
I would be interested in seeing/knowing what you have inside the BitBlock class.
Here you go:
[quote=124366:@Jon Ogden]Does anyone have any binary arithmetic classes they would be willing to share? I’m in the process of writing a class for calculating network subnetting and as IP address and
subnet masks are really binary numbers, doing the operations in binary would be much simpler than what I am doing right now. I’ve got it all working but I figured if someone already has some classes
I could use, it would make my life simpler.
Sure wish Xojo supported things like actual types for binary and hex numbers as opposed to having the use the literal characters which don’t even really work when you convert a 10 based number to
binary using the bin function because then you end up with a string. If Bin gave me a binary number, that would be better.
Oh well…
Jon I wrote subnet calculator class that convers decimal to 32bit decimal words and does the math the. Converts back to decimal. I found that this was the most scalable and efficient way to perform
the calculations.
[quote=124369:@Paul Lefebvre]A number is a number, so I’m not sure I follow.
To add binary “101” (5) and binary “11” (3) you could do this:
Dim b1 As String = “101”
Dim b2 As String = “11”
Dim n1 As Integer = Val("&b" + b1)
Dim n2 As Integer = Val("&b" + b2)
Dim sum As Integer = n1 + n2
Dim binaryValue As String = Bin(sum) // result is “1000” (8)[/quote]
I know and that’s what I am doing. However, why then do we have Integer data types for that matter? Let’s just make everything 64 bit real numbers since you can do everything with just that.
It’s a pain in the but to do all the conversions back and forth and back and forth. And a binary number is NOT a string.
Would you care to share your code?
Look at it this way… ALL Integer number datatypes are a direct BINARY represation of the value…
So what sense/value would it be to create another datatype… when classes/methods/procedures can alter the input/processing and presentation layers so easily? and the same applies to HEX and OCT… why
have special datatype when they are true INTEGERS?
I am at the moment writing a Programmers Calculator (yeah another one)… but in SWIFT, and there I am using the exact same methods as mentioned above.
plug :
An IPV4 address (like 172.16.254.1) is just 4 bytes (32 bit)
Bitwise ands plus a handful of shifts is all you need
172 = &hAC = &b10101100
16 = &h10 = &b00010000
254 = &hFE = &b11111110
1 = &h01 = &b00000001
These are all exactly equivalent ways of visualizing the same value
IPV6 is 128 bit but you can still divide it into bytes & nibbles using bitwise functions & manipulate it
You seem a little confused about how numbers are actually stored/manipulated in your cpu. There are bitwise functions built into Xojo. What more do you need? You’re probably doing more conversions
than you need to do. Mike went through this recently. Perhaps you could review his threads related to IP address manipulations.
Yes I have no problem sharing it with you, but I am at a pool party for my kids back to school thing
Ill post the link to the github project a bit later if that is ok.
My Subnet Calculator (www.intelligentsubnetcalculator.com) MAS app uses these classes.
I sure did and Thanks to Tim and Kem I was able to finish it nicely. I will post as soon as I can this evening.
I originally wrote these subnet calculator classes for an enterprise app I am working on, however it morphed into a user app that I sell on the MAS. It was easier for my to just clone that source
code and strip it down completely until I just had the calculator shell with the Subnet classes. I tried my best to declutter it, but I have quite a bit of “other” UI user code in this project.
I have some custom events that I never actually did use which are available for you to use easily also.
HTH and please let me know if you have any questions.
This project also holds all of my IPv4 and subnet mask user validation code (if you need that).
Please don’t assume you know things about me.
I know numbers are really binary items in the CPU which is why I find it so amusing you cannot actually work in binary in Xojo yet no one seems bothered by it!
Yes, there are bitwise functions but they are for logical operations on bits not arithmetic.
A bitwise AND of 11001100 and 10001011 and the sum of those two numbers is completely different same thing with bitwise OR.
Bitwise AND = 10001000
Bitwise OR = 11111100
Sum = 101010111
[quote=124502:@Jon Ogden]Please don’t assume you know things about me.
I know numbers are really binary items in the CPU which is why I find it so amusing you cannot actually work in binary in Xojo yet no one seems bothered by it!
Yes, there are bitwise functions but they are for logical operations on bits not arithmetic.
A bitwise AND of 11001100 and 10001011 and the sum of those two numbers is completely different same thing with bitwise OR.
Bitwise AND = 10001000
Bitwise OR = 11111100
Sum = 101010111[/quote]
11001100 or 10001011 => 11001111 (looks like you transposed it)
I’m not bothered by it because working in hex decimal octal or binary is easy enough to switch between
Bitwise is for bitwise manipulation not JUST logical and / or / not etc
dim i as integer = &b11001100 // 204
dim j as integer = &b10001011 // 139
dim i1 as integer = i and j
dim i2 as integer = i or j
dim i3 as integer = i + j
Binary Arithmetic consists primarily of two types of operations…
Bitwise : AND, OR, NAND, NOR, XOR, SHL, SHR, ROL, ROR etc.
Math : Add, Subtract, Multiply, Divide (with the last two usually resulting in a truncated integer value)
The MATH can be done using the inbuilt operators
&b11001100 + &b10001011 = &b101010111 (same as 204+139 = 343)
while the bitwise can be done a few different ways
&b11001100 AND &b10001011 = 10001000 (same as 204 AND 139 = 136)
&b11001100 OR &b10001011 = 11001111 (same as 204 OR 139 = 207)
The BITWISE operators can do this and much more… and the more esoteric functions can be derived by combining the existing functions
And if you want to be sticky… EVERYTHING in a computer is done in BINARY math… INTEGER and DOUBLE datatypes are simply a presentation layer to help us poor humans understand… | {"url":"https://forum.xojo.com/t/binary-arithmetic-classes/18866","timestamp":"2024-11-06T05:44:11Z","content_type":"text/html","content_length":"50219","record_id":"<urn:uuid:191e6a23-849d-486a-88b1-f41179340694>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00359.warc.gz"} |
Math Is Fun Forum
Registered: 2010-06-20
Posts: 10,610
Re: Unit Conversion Tool
Registered: 2010-06-20
Posts: 10,610
Re: Unit Conversion Tool
Hi MathsIsFun,
I'm amazed! I'm staggered! It's not just good; it's brilliant!
I'm checking the factors now. (Well not quite now as I'm still typing this; but soon!)
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
Registered: 2005-06-28
Posts: 48,355
Re: Unit Conversion Tool
Hi MathsIsFun,
I think the newer version is elaborate and neatly presented.
Every possible detail has been incorporated.
I didn't find any error on the page.
The first version is good too; I think the information detailed in the higher version is much better.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.
From: Bumpkinland
Registered: 2009-04-12
Posts: 109,606
Re: Unit Conversion Tool
Hi MIF;
Nice idea for the slider on the new one.
Sometimes when I was using the Mass conversions the numbers in purple, next to the slider are missing their units.
In mathematics, you don't understand things. You just get used to them.
If it ain't broke, fix it until it is.
Always satisfy the Prime Directive of getting the right answer above all else.
Re: Unit Conversion Tool
Testing out a new version.
Current version: Unit Conversion Tool
New Version: Unit Conversion Test
What do you like/dislike about each version ?
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Unit Conversion Tool
Thanks, simron.
I have done my best to research the conversions, but there is always the risk of a mistake.
So if anyone would like to take the time to confirm any of the calculations, that would be good.
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Real Member
Registered: 2006-10-07
Posts: 237
Re: Unit Conversion Tool
It's great! I especially like the slider part for quick calculations.
Linux FTW
Unit Conversion Tool
I updated the Unit Conversion Tool using Flash.
Any good? (... any bad?)
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
hi again,
I've played with the conversions for half an hour. The only thing I threw up was when I tried to mess around with LY, parsecs and AU along with fathoms and feet. At this point I had 1 fathom =
6.00000002 feet. So just a rounding error caused I suppose by the big numbers I had previously been using. When I cleared and restarted the fathoms came out as expected, so I don't think this is a
problem. Anyone who uses a conversion factor and expects it'll be accurate at the 9th sig fig deserves what they get.
There are so many options, so I haven't done more than scratch the surface but here's what I checked:
electricity (Provoked an interesting discussion with Mrs B about whether a coulomb should be on the same screen as a faraday. So how much is one faraday she asked. When I said 96521.9 coulombs she
remembered. Eh? She didn't remember they measured the same thing but remembers the conversion value to 6sf. Interesting!)
but have a look at
http://www.britannica.com/EBchecked/top … 98/faraday
time ( oh wow ... sidereal day!!)
All seem ok to me.
And then I went to check degrees to radians .
Oh I'm so sorry.
Last edited by Bob (2011-12-01 02:19:16)
Children are not defined by school ...........The Fonz
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Sometimes I deliberately make mistakes, just to test you! …………….Bob
Re: Unit Conversion Tool
LOL ... will add angles.
Thanks everyone!
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Registered: 2022-05-31
Posts: 1
Re: Unit Conversion Tool
There lies the mistake.
As krassi_holmz rightly pointed out in the previous post,
[cached] 0.019073486328125 ms
[cached] 0.0061988830566406 ms
[cached] 0.0059604644775391 ms
It would not be right to say a=b.
When square root is being taken on both sides of an equation, the plus or minus/minus or plus sign has an important role to play, it cannot be overlooked.
When you had taken the square roots of the LHS and the RHS in steps 2, 3, you have taken only the positive values. Remember square root on 1 is not +1 alone, it is ±1.
It appears to me that if one wants to make progress in mathematics, one should study the masters and not the pupils. - Niels Henrik Abel.
Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking. | {"url":"https://www.mathisfunforum.com/viewtopic.php?pid=425502","timestamp":"2024-11-10T18:31:10Z","content_type":"application/xhtml+xml","content_length":"21673","record_id":"<urn:uuid:c3e5576e-c5e0-42bd-9633-2229cbf8723f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00362.warc.gz"} |
A primer on Isotope Notation
A Primer on Isotope Notation
The isotope ratio Fractional abundance, ^aF
Relationship between ^aR and ^aF The delta notation, d^aX
Change of isotopic reference scale Measures of fractionation: a, 1000ln(a), e, and D
The fractionation factor a Natural log notation, 1000ln(a)
The epsilon notation, e The capital delta notation, D
The isotope ratio:
The relative abundance of two isotopes a and b of element X can be expressed as an isotopic ratio, ^aR:
By convention, the more abundant isotope is placed in the denominator. This notation is not particularly helpful in itself however, as changes in R could result from changes in either ^aX or ^bX. It
is however the basis for several expressions that are very useful.
Fractional abundance, ^aF:
This expression is particularly useful in artificially enriched systems where the ratio of ^aX to ^bX is increased by the intentional addition of a pure source of one of the two isotopes (usually the
heavy isotope). The pure source is referred to as a spike, the process of adding the source as "spiking" a sample. Spikes allow us to monitor the movement of isotopes from one reservoir to another in
response to one of more chemical reactions.
Relationship between ^aR and ^aF:
Note that:
The delta notation, d^aX.
In many natural systems, the isotopic ratio ^aR exhibits variability in the range of the third to fifth decimal place. Numbers this small are best presented in terms of per mil, or parts per thousand
(‰). The ratio in a sample, denoted here as ^aR[x ]can be expressed relative to the isotopic ratio of a standard ^aR[std ]using a difference relationship knows as the delta notation (d^aX):
or alternatively,
Change of isotopic reference scale:
This relationship can be used to convert an isotopic value from one reference scale to another reference scale.
Measures of fractionation: a, 1000ln(a), e, and D
We are interested in measuring the isotopic offset between substances. Such offsets arise from the expression of an isotope effect due to equilibrium or kinetic fractionation during a physical
process or chemical reaction. The size of this isotopic fractionation can be expressed in several ways.
The fractionation factor a is defined as:
where K is the equilibrium constant for the associated reaction and n is the number of atoms exchanged. For simplicity, isotope exchange equations are usually written such that n=1 so that a =K. It
is worth noting that most values of a are close to 1, with variability in the third through fifth decimal place. We can see that this is the case if we express a in terms of the delta notation
Because a is close to unity, it is convenient to express fractionation in ways that accentuate the differences between d[A] and d[B]. This can be done in one of three ways which yield approximately
the same value for the per mil fractionation.
Natural log notation, 1000ln(a):
One would think that this notation would have its own symbol, but surprisingly, it does not! We can approximate the fractionation in per mil from the fractionation factor a by taking the natural log
of a and multiplying it by 1000. This is written simply as 1000ln(a). Mathematically, this works because we can think of a as being composed of 1 + e, a small deviation. Now if we take the natural
log of 1+ e, the result can be expanded as a type of infinite series known as a Maclaurin series in which the first term e, is the largest and thus serves as a reasonable approximation of the natural
log of a. If we retain more terms, we would obtain a more accurate result, but by convention, only the first term is retained:
Multiplying by 1000 yields the result in per mil. But since we are interested in e anyway, there is an easier way to express the per mil fractionation.
The epsilon notation, e:
The epsilon notation has the advantage over the 1000ln(a) notation in that it is an exact expression of the per mil fractionation. There is a final way of determining the per mil fractionation. This
last method is the least accurate, but most commonly applied because of its simplicity.
The capital delta notation, D:
This method is least accurate because the errors in the two isotopic measurements do not cancel as is the case for random errors when calculating a ratio. In most cases, however, it is sufficiently
Back to the Paleoceanography Homepage | {"url":"https://www.personal.kent.edu/~jortiz/paleoceanography/isotopic_primer.html","timestamp":"2024-11-04T02:31:23Z","content_type":"text/html","content_length":"18839","record_id":"<urn:uuid:f401d9ea-d2aa-4c34-a80e-7e02e4238d9c>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00041.warc.gz"} |